
RNN does not train when batch size > 1 with variable-length data

I am implementing a simple RNN that predicts 1/0 labels for some variable-length time series data. The network first feeds the training data through an LSTM, then classifies with a linear layer.

Normally we would train the network with mini-batches. The problem is that this simple RNN does not train at all when I use batch_size > 1.
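
For context, my batching relies on pad_sequence and pack_padded_sequence; here is a tiny standalone sketch (separate from the reproducer further down) of what they do with a variable-length batch:

import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

seqs = [torch.Tensor([[1], [1], [0]]),  # length 3
        torch.Tensor([[0]])]            # length 1
lengths = [3, 1]                        # must be sorted, longest first

padded = pad_sequence(seqs)             # shape: (max_len=3, batch=2, feature=1)
packed = pack_padded_sequence(padded, lengths)

print(padded.shape)         # torch.Size([3, 2, 1])
print(packed.batch_sizes)   # tensor([2, 1, 1]): the padded step of the short sequence is dropped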

I managed to create a minimal code example that reproduces the problem. If you set batch_size=1 in main(), the network trains successfully, but if you set batch_size=2 it does not train at all and the loss just bounces around. (Requires Python 3 and PyTorch >= 0.4.0.)

import numpy as np
import random
import torch
import torch.nn as nn
import torch.optim as optim
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence


class ToyDataLoader(object):

    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.index = 0
        self.dataset_size = 10

        # generate 10 random variable length training samples,
        # each time step has 1 feature dimension
        self.X = [
            [[1], [1], [1], [1], [0], [0], [1], [1], [1]],
            [[1], [1], [1], [1]],
            [[0], [0], [1], [1]],
            [[1], [1], [1], [1], [1], [1], [1]],
            [[1], [1]],
            [[0]],
            [[0], [0], [0], [0], [0], [0], [0]],
            [[1]],
            [[0], [1]],
            [[1], [0]]
        ]

        # assign labels for the toy training set
        self.y = torch.LongTensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

    def __len__(self):
        return self.dataset_size // self.batch_size

    def __iter__(self):
        return self

    def __next__(self):
        if self.index + self.batch_size > self.dataset_size:
            self.index = 0
            raise StopIteration()
        if self.index == 0:  # shuffle the dataset
            tmp = list(zip(self.X, self.y))
            random.shuffle(tmp)
            self.X, self.y = zip(*tmp)
            self.y = torch.LongTensor(self.y)
        X = self.X[self.index: self.index + self.batch_size]
        y = self.y[self.index: self.index + self.batch_size]
        self.index += self.batch_size
        return X, y


class NaiveRNN(nn.Module):
    def __init__(self):
        super(NaiveRNN, self).__init__()
        self.lstm = nn.LSTM(1, 128)
        self.linear = nn.Linear(128, 2)

    def forward(self, X):
        '''
        Parameter:
            X: list containing variable length training data
        '''

        # get the length of each seq in the batch
        seq_lengths = [len(x) for x in X]

        # convert to torch.Tensor
        seq_tensor = [torch.Tensor(seq) for seq in X]

        # sort seq_lengths and seq_tensor based on seq_lengths, required by torch.nn.utils.rnn.pad_sequence
        pairs = sorted(zip(seq_lengths, seq_tensor),
                       key=lambda pair: pair[0], reverse=True)
        seq_lengths = torch.LongTensor([pair[0] for pair in pairs])
        seq_tensor = [pair[1] for pair in pairs]

        # padded_seq shape: (seq_len, batch_size, feature_size)
        padded_seq = pad_sequence(seq_tensor)

        # pack them up
        packed_seq = pack_padded_sequence(padded_seq, seq_lengths.numpy())

        # feed to rnn
        packed_output, (ht, ct) = self.lstm(packed_seq)

        # linear classification layer
        y_pred = self.linear(ht[-1])

        return y_pred


def main():
    trainloader = ToyDataLoader(batch_size=2)  # not training at all !!!
    # trainloader = ToyDataLoader(batch_size=1) # it converges !!!

    model = NaiveRNN()
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adadelta(model.parameters(), lr=1.0)

    for epoch in range(30):
        # switch to train mode
        model.train()

        for i, (X, labels) in enumerate(trainloader):

            # compute output
            outputs = model(X)
            loss = criterion(outputs, labels)

            # measure accuracy and record loss
            _, predicted = torch.max(outputs, 1)
            accu = (predicted == labels).sum().item() / labels.shape[0]

            # compute gradient and do SGD step
            optimizer.zero_grad()
            loss.backward()

            optimizer.step()

            print('Epoch: [{}][{}/{}]\tLoss {:.4f}\tAccu {:.3f}'.format(
                epoch, i, len(trainloader), loss.item(), accu))


if __name__ == '__main__':
    main()

Sample output with batch_size=1:

...
Epoch: [28][7/10]       Loss 0.1582     Accu 1.000
Epoch: [28][8/10]       Loss 0.2718     Accu 1.000
Epoch: [28][9/10]       Loss 0.0000     Accu 1.000
Epoch: [29][0/10]       Loss 0.2808     Accu 1.000
Epoch: [29][1/10]       Loss 0.0000     Accu 1.000
Epoch: [29][2/10]       Loss 0.0001     Accu 1.000
Epoch: [29][3/10]       Loss 0.0149     Accu 1.000
Epoch: [29][4/10]       Loss 0.1445     Accu 1.000
Epoch: [29][5/10]       Loss 0.2866     Accu 1.000
Epoch: [29][6/10]       Loss 0.0170     Accu 1.000
Epoch: [29][7/10]       Loss 0.0869     Accu 1.000
Epoch: [29][8/10]       Loss 0.0000     Accu 1.000
Epoch: [29][9/10]       Loss 0.0498     Accu 1.000

Sample output with batch_size=2:

...
Epoch: [27][2/5]        Loss 0.8051     Accu 0.000
Epoch: [27][3/5]        Loss 1.2835     Accu 0.000
Epoch: [27][4/5]        Loss 1.0782     Accu 0.000
Epoch: [28][0/5]        Loss 0.5201     Accu 1.000
Epoch: [28][1/5]        Loss 0.6587     Accu 0.500
Epoch: [28][2/5]        Loss 0.3488     Accu 1.000
Epoch: [28][3/5]        Loss 0.5413     Accu 0.500
Epoch: [28][4/5]        Loss 0.6769     Accu 0.500
Epoch: [29][0/5]        Loss 1.0434     Accu 0.000
Epoch: [29][1/5]        Loss 0.4460     Accu 1.000
Epoch: [29][2/5]        Loss 0.9879     Accu 0.000
Epoch: [29][3/5]        Loss 1.0784     Accu 0.500
Epoch: [29][4/5]        Loss 0.6051     Accu 1.000

I have searched through a lot of material but still cannot figure out the cause.

1 Answer

    I think one main problem is that you are passing ht[-1] as the input to the linear layer.
    ht[-1] contains the state from the last time step, which is valid only for the inputs of maximum length.

    To fix this, you need to unpack the output and take the output at the last valid time step of each input.
    Here is how the forward pass needs to change:

    # requires: from torch.nn.utils.rnn import pad_packed_sequence

    # feed to rnn
    packed_output, (ht, ct) = self.lstm(packed_seq)

    # unpack output; lstm_out shape: (max_seq_len, batch_size, hidden_size)
    lstm_out, seq_len = pad_packed_sequence(packed_output)

    # get vector containing the last valid time-step index of each sequence
    last_input = seq_len - 1

    # batch indices 0 .. batch_size-1 (equivalent to torch.arange(seq_len.size(0)))
    indices = torch.linspace(0, (seq_len.size(0) - 1), steps=seq_len.size(0)).long()

    # take the output at the last valid time step of each sequence,
    # then apply the linear classification layer
    y_pred = self.linear(lstm_out[last_input, indices, :])
    return y_pred
    

    I still was not able to make it converge with the rest of the parameters, but this should help.
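
    For reference, a sketch of the whole forward() with this change folded in, keeping the rest of your code the same and assuming pad_packed_sequence is added to the existing torch.nn.utils.rnn import:

    def forward(self, X):
        # get the length of each sequence in the batch
        seq_lengths = [len(x) for x in X]
        seq_tensor = [torch.Tensor(seq) for seq in X]

        # sort by length, longest first (required by pack_padded_sequence)
        pairs = sorted(zip(seq_lengths, seq_tensor),
                       key=lambda pair: pair[0], reverse=True)
        seq_lengths = torch.LongTensor([pair[0] for pair in pairs])
        seq_tensor = [pair[1] for pair in pairs]

        # pad to (max_seq_len, batch_size, feature_size), then pack
        padded_seq = pad_sequence(seq_tensor)
        packed_seq = pack_padded_sequence(padded_seq, seq_lengths.numpy())

        # feed to rnn
        packed_output, (ht, ct) = self.lstm(packed_seq)

        # unpack, then gather the output at the last valid time step of each sequence
        lstm_out, seq_len = pad_packed_sequence(packed_output)
        batch_idx = torch.arange(seq_len.size(0)).long()
        y_pred = self.linear(lstm_out[seq_len - 1, batch_idx, :])
        return y_pred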
