
PyTorch RNN memory allocation error in DataLoader


I am writing an RNN in PyTorch. I have the following lines of code:

data_loader = torch.utils.data.DataLoader(
    data,
    batch_size=args.batch_size,
    shuffle=True,
    num_workers=args.num_workers,
    drop_last=True)

If I set num_workers to 0, I get a segmentation fault. If I set num_workers to > 0, I get this traceback:

Traceback (most recent call last):
  File "rnn_model.py", line 352, in <module>
    train_model(train_data, dev_data, test_data, model, args)
  File "rnn_model.py", line 212, in train_model
    loss = run_epoch(train_data, True, model, optimizer, args)
  File "rnn_model.py", line 301, in run_epoch
    for batch in tqdm.tqdm(data_loader):
  File "/home/username/miniconda3/lib/python2.7/site-packages/tqdm/_tqdm.py", line 872, in __iter__
    for obj in iterable:
  File "/home/username/miniconda3/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 303, in __iter__
    return DataLoaderIter(self)
  File "/home/username/miniconda3/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 162, in __init__
    w.start()
  File "/home/username/miniconda3/lib/python2.7/multiprocessing/process.py", line 130, in start
    self._popen = Popen(self)
  File "/home/username/miniconda3/lib/python2.7/multiprocessing/forking.py", line 121, in __init__
    self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory

2 Answers

  • 1

    My guess is that whatever values you are passing through args for the batch size and the number of workers are being cast incorrectly or misinterpreted.

    Please print them out and make sure the values being passed are the ones you expect.
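    A minimal sketch of that sanity check, assuming the args come from argparse (the flag names here are hypothetical stand-ins for the question's own arguments). Without `type=int`, argparse hands back strings, which is exactly the kind of misinterpretation suspected above:

    ```python
    # Sanity-check that batch_size and num_workers arrive as positive ints.
    # `args` stands in for the argparse namespace used in the question.
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--batch_size", type=int, default=32)
    parser.add_argument("--num_workers", type=int, default=2)
    args = parser.parse_args([])  # empty list: use defaults for the demo

    print(type(args.batch_size), args.batch_size)
    print(type(args.num_workers), args.num_workers)

    # If these fail, DataLoader is receiving something other than sane ints.
    assert isinstance(args.batch_size, int) and args.batch_size > 0
    assert isinstance(args.num_workers, int) and args.num_workers >= 0
    ```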

  • 0

    You are trying to load more data than your system can hold in RAM. You can either load only part of the data, or use/write a data loader that only loads the data needed for the current batch.
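    One way to sketch such a loader, assuming a hypothetical one-sample-per-line text file: the class below implements the `__len__`/`__getitem__` protocol that `torch.utils.data.DataLoader` expects from a map-style dataset, but only keeps byte offsets in memory and reads each sample from disk on demand:

    ```python
    # Lazy map-style dataset sketch: holds only line offsets in RAM and
    # reads one sample from disk per __getitem__ call.
    class LazyLineDataset(object):
        def __init__(self, path):
            self.path = path
            # One pass to index each line's byte offset; the offsets are
            # tiny compared with the data itself.
            self.offsets = []
            with open(path, "rb") as f:
                offset = 0
                for line in f:
                    self.offsets.append(offset)
                    offset += len(line)

        def __len__(self):
            return len(self.offsets)

        def __getitem__(self, idx):
            # Seek straight to the requested sample; only that one line
            # is ever materialized in memory.
            with open(self.path, "rb") as f:
                f.seek(self.offsets[idx])
                return f.readline().decode("utf-8").rstrip("\n")
    ```

    An instance of this can be passed to `torch.utils.data.DataLoader` in place of an in-memory list, so each worker process only touches the samples of its current batch.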
