
How to create a pivot table on extremely large DataFrames in Pandas


I need to create a pivot table of 2000 columns by roughly 300,000-500,000 rows from a dataset of around 60 million rows. I tried pivoting in chunks of 100,000 rows, and that works, but when I try to recombine the DataFrames with an .append() followed by .groupby('someKey').sum(), all my memory gets used up and Python eventually crashes.

How can I pivot data this large with a limited amount of RAM?

Edit: adding sample code

The code below includes various test outputs along the way, but the last print is the one we really care about. Note that if we change segMax to 3 instead of 4, the code produces a false positive of correct output. The main issue is that if a shipmentid entry is not present in every chunk that sum(wawa) sees, it does not show up in the output.

import pandas as pd
import numpy as np
import random
from pandas.io.pytables import *
import os

pd.set_option('io.hdf.default_format','table') 

# create a small dataframe to simulate the real data.
def loadFrame():
    frame = pd.DataFrame()
    frame['shipmentid']=[1,2,3,1,2,3,1,2,3] #evenly distributing shipmentid values for testing purposes
    frame['qty']= np.random.randint(1,5,9) #random quantity is ok for this test
    frame['catid'] = np.random.randint(1,5,9) #random category is ok for this test
    return frame

def pivotSegment(segmentNumber,passedFrame):
    segmentSize = 3 #take 3 rows at a time
    frame = passedFrame[(segmentNumber*segmentSize):(segmentNumber*segmentSize + segmentSize)] #slice the input DF

    # ensure that all chunks are identically formatted after the pivot by appending a dummy DF with all possible category values
    span = pd.DataFrame() 
    span['catid'] = range(1,5+1)
    span['shipmentid']=1
    span['qty']=0

    frame = frame.append(span)

    return frame.pivot_table(['qty'],index=['shipmentid'],columns='catid', \
                             aggfunc='sum',fill_value=0).reset_index()

def createStore():

    store = pd.HDFStore('testdata.h5')
    return store

segMin = 0
segMax = 4

store = createStore()
frame = loadFrame()

print('Printing Frame')
print(frame)
print(frame.info())

for i in range(segMin,segMax):
    segment = pivotSegment(i,frame)
    store.append('data',frame[(i*3):(i*3 + 3)]) #store the raw chunk for reference
    store.append('pivotedData',segment) #accumulate the pivoted chunks in the HDF store

print('\nPrinting Store')   
print(store)
print('\nPrinting Store: data') 
print(store['data'])
print('\nPrinting Store: pivotedData') 
print(store['pivotedData'])

print('**************')
print(store['pivotedData'].set_index('shipmentid').groupby('shipmentid',level=0).sum())
print('**************')
print('$$$')
for df in store.select('pivotedData',chunksize=3):
    print(df.set_index('shipmentid').groupby('shipmentid',level=0).sum())

print('$$$')
store['pivotedAndSummed'] = sum((df.set_index('shipmentid').groupby('shipmentid',level=0).sum() for df in store.select('pivotedData',chunksize=3)))
print('\nPrinting Store: pivotedAndSummed') 
print(store['pivotedAndSummed'])

store.close()
os.remove('testdata.h5')
print('closed')

1 Answer

  • 13

    You could do the appending with HDF5/pytables. This keeps it out of RAM.

    Use the table format:

    store = pd.HDFStore('store.h5')
    for ...:
        ...
        chunk  # the chunk of the DataFrame (which you want to append)
        store.append('df', chunk)
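
    Applied to the question's data, a minimal sketch of that loop (the CSV filename and the chunksize are illustrative assumptions, not from the original post):

    import pandas as pd

    store = pd.HDFStore('store.h5')

    # stream the raw data in chunks so it never has to sit in RAM all at once
    for chunk in pd.read_csv('shipments.csv', chunksize=100000):  # hypothetical input file
        # data_columns makes shipmentid queryable later via store.select
        store.append('df', chunk, data_columns=['shipmentid'])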
    

    Now you can read it in as a DataFrame in one go (assuming this DataFrame can fit in memory!):

    df = store['df']
    

    You can also query, to retrieve only subsections of the DataFrame.
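
    For example, a minimal sketch of such a query (this assumes shipmentid was stored as a data column when appending, e.g. store.append('df', chunk, data_columns=['shipmentid'])):

    # pull back only the rows for one shipmentid, without reading the whole frame
    subset = store.select('df', where="shipmentid == 1")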

    An aside: you should also buy more RAM, it's cheap.


    Edit: you can groupby/sum from the store iteratively, since this "map-reduces" over the chunks:

    # note: this doesn't work, see below
    sum(df.groupby().sum() for df in store.select('df', chunksize=50000))
    # equivalent to (but doesn't read in the entire frame)
    store['df'].groupby().sum()
    

    Edit2: Using sum as above doesn't actually work in pandas 0.16 (I thought it did in 0.15.2); instead you can use reduce with add:

    reduce(lambda x, y: x.add(y, fill_value=0),
           (df.groupby().sum() for df in store.select('df', chunksize=50000)))
    

    In Python 3 you have to import reduce from functools.
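
    To see concretely why the plain sum fails: adding two DataFrames aligns their indexes, and any group missing from one side becomes NaN, whereas add with fill_value=0 treats the missing side as zero. A tiny illustration with made-up values:

    import pandas as pd

    a = pd.DataFrame({'qty': [1]}, index=pd.Index([1], name='shipmentid'))  # chunk sum that only saw group 1
    b = pd.DataFrame({'qty': [2]}, index=pd.Index([2], name='shipmentid'))  # chunk sum that only saw group 2

    print(a + b)                   # qty is NaN for both groups: this is what sum() ends up doing
    print(a.add(b, fill_value=0))  # qty is 1.0 and 2.0: this is what reduce with add does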

    Perhaps it's more pythonic/readable to write it as:

    chunks = (df.groupby().sum() for df in store.select('df', chunksize=50000))
    res = next(chunks)  # will raise if there are no chunks!
    for c in chunks:
        res = res.add(c, fill_value=0)
    

    If performance is poor / there are lots of new groups, then it may be preferable to start res as zeros of the correct size (by getting the unique group keys, e.g. by looping through the chunks), and then add in place.
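
    A minimal sketch of that pre-allocation idea, using two passes over the store (the 'shipmentid' key and the column handling are illustrative assumptions):

    import pandas as pd

    # pass 1: collect the unique group keys and the full set of value columns
    keys, cols = set(), set()
    for df in store.select('df', chunksize=50000):
        keys.update(df['shipmentid'].unique())
        cols.update(df.columns.drop('shipmentid'))

    # pre-allocate the result as zeros of the correct size (float, so sums fit)
    res = pd.DataFrame(0.0, index=sorted(keys), columns=sorted(cols))

    # pass 2: add each chunk's partial sums in place
    # (assumes the remaining columns are numeric)
    for df in store.select('df', chunksize=50000):
        partial = df.groupby('shipmentid').sum()
        res.loc[partial.index, partial.columns] += partial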
