
Cannot join two RDDs in PySpark


I have two DataFrames named df1 and df2, but when I try to join them, the join does not work. Let me show the schema and a sample of the output for each DataFrame.

df1
Out[160]: DataFrame[BibNum: string, CallNumber: string, CheckoutDateTime: string, ItemBarcode: string, ItemCollection: string, ItemType: string]

[Row(BibNum=u'BibNum', CallNumber=u'CallNumber', CheckoutDateTime=u'CheckoutDateTime', ItemBarcode=u'ItemBarcode', ItemCollection=u'ItemCollection', ItemType=u'ItemType'),
 Row(BibNum=u'1842225', CallNumber=u'MYSTERY ELKINS1999', CheckoutDateTime=u'05/23/2005 03:20:00 PM', ItemBarcode=u'10035249209', ItemCollection=u'namys', ItemType=u'acbk')]



df2    
DataFrame[Author: string, BibNum: string, FloatingItem: string, ISBN: string, ItemCollection: string, ItemCount: string, ItemLocation: string, ItemType: string, PublicationDate: string, Publisher: string, ReportDate: string, Subjects: string, Title: string]

[Row(Author=u'Author', BibNum=u'BibNum', FloatingItem=u'FloatingItem', ISBN=u'ISBN', ItemCollection=u'ItemCollection', ItemCount=u'ItemCount', ItemLocation=u'ItemLocation', ItemType=u'ItemType', PublicationDate=u'PublicationYear', Publisher=u'Publisher', ReportDate=u'ReportDate', Subjects=u'Subjects', Title=u'Title'),
 Row(Author=u"O'Ryan| Ellie", BibNum=u'3011076', FloatingItem=u'Floating', ISBN=u'1481425730| 1481425749| 9781481425735| 9781481425742', ItemCollection=u'ncrdr', ItemCount=u'1', ItemLocation=u'qna', ItemType=u'jcbk', PublicationDate=u'2014', Publisher=u'Simon Spotlight|', ReportDate=u'09/01/2017', Subjects=u'Musicians Fiction| Bullfighters Fiction| Best friends Fiction| Friendship Fiction| Adventure and adventurers Fiction', Title=u"A tale of two friends / adapted by Ellie O'Ryan ; illustrated by Tom Caulfield| Frederick Gardner| Megan Petasky| and Allen Tam.")]

When I try to join the two with this command:

df3=df1.join(df2, df1.BibNum==df2.BibNum)

there is no error, but the resulting DataFrame appears to have overlapping columns:

DataFrame[BibNum: string, CallNumber: string, CheckoutDateTime: string, ItemBarcode: string, ItemCollection: string, ItemType: string, Author: string, BibNum: string, FloatingItem: string, ISBN: string, ItemCollection: string, ItemCount: string, ItemLocation: string, ItemType: string, PublicationDate: string, Publisher: string, ReportDate: string, Subjects: string, Title: string]

Finally, after I get df3 (the joined DataFrame), when I try df3.take(2), the error `list index out of range` occurs. The result I want is to find out which ItemLocations are available, by counting on which days (CheckoutDateTime) books are borrowed the most.
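As a side note (an assumption, since the data-loading code is not shown in the question): a `list index out of range` on take() often comes from a malformed row that has fewer delimiter-separated fields than the schema expects, e.g. when an RDD of split lines is turned into Rows by positional index. A minimal plain-Python illustration of that failure mode:

```python
# Hypothetical malformed line: only 2 of the 6 fields df1's schema expects.
line = "1842225,MYSTERY ELKINS1999"
fields = line.split(",")

try:
    item_type = fields[5]  # schema expects a 6th field (index 5)
except IndexError as exc:
    print(exc)  # list index out of range
```

If this is the cause, the error surfaces only when an action such as take() forces the short row to be evaluated, not at join time.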

1 Answer


    You need to join the DataFrames on their common columns; otherwise the join produces two conflicting columns with the same name, one from each of the 2 DataFrames.

    common_cols = [x for x in df1.columns if x in df2.columns]
    df3 = df1.join(df2, on=common_cols, how='outer')
    

    You can use an outer join or a left join, as needed. Please don't post multiple questions about the same problem. You have already received an answer at: when trying to join two tables, happening IndexError: list index out of range in pyspark
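The common-column computation in the answer can be checked in plain Python, using the column names taken from the two schemas printed in the question:

```python
# Column names copied from the df1 and df2 schemas shown in the question.
df1_cols = ["BibNum", "CallNumber", "CheckoutDateTime", "ItemBarcode",
            "ItemCollection", "ItemType"]
df2_cols = ["Author", "BibNum", "FloatingItem", "ISBN", "ItemCollection",
            "ItemCount", "ItemLocation", "ItemType", "PublicationDate",
            "Publisher", "ReportDate", "Subjects", "Title"]

# Same list comprehension as in the answer: columns present in both frames.
common_cols = [x for x in df1_cols if x in df2_cols]
print(common_cols)  # ['BibNum', 'ItemCollection', 'ItemType']
```

Note that joining on all three common columns requires rows to match on ItemCollection and ItemType as well, not just BibNum; if you only want to match on BibNum, one option is to drop or rename the other shared columns in one DataFrame before joining.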
