
Object not iterable when trying to collect the RDD, pyspark [duplicate]


I'm new to Spark. This error occurred when I tried to collect results from RDD_new, after passing a top-level external function to RDD_old.reduceByKey.

First, I defined a treeStruct:

# a tree held as two dictionaries: nodes and edges
class treeStruct(object):
    def __init__(self,node,edge):
        self.node = node
        self.edge = edge

After that, I converted two treeStructs into an RDD with sc.parallelize:

RDD = sc.parallelize([treeStruct1,treeStruct2])
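
For reference, a minimal version of my setup looks like this; nodeDictionary and edgeDictionary here are hypothetical placeholders for my real data:

    # hypothetical placeholder data -- the real dictionaries are much larger
    nodeDictionary = {"n1": "root", "n2": "leaf"}
    edgeDictionary = {("n1", "n2"): 1}

    treeStruct1 = treeStruct(nodeDictionary, edgeDictionary)
    treeStruct2 = treeStruct(nodeDictionary, edgeDictionary)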

Then I passed a top-level function, defined outside the driver code, to reduceByKey. The function contains several "for" loops, similar to this sketch (a minimal reduceByKey example follows it):

def func(tree1,tree2):
    if conditions according to certain attributes of the RDD:
        for dummy:
            do something to the RDD attributes
    if conditions according to certain attributes of the RDD:
        for dummy2:
            do something to the RDD attributes
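
For reference, reduceByKey assumes the RDD holds (key, value) pairs, and the function it receives merges two values that share the same key. A minimal sketch of that contract (reusing the sc from above):

    # reduceByKey expects an RDD of (key, value) pairs;
    # the passed function merges two *values* that have the same key.
    pairs = sc.parallelize([("a", 1), ("a", 2), ("b", 3)])

    def add(v1, v2):    # top-level function, so it pickles cleanly
        return v1 + v2

    print(pairs.reduceByKey(add).collect())    # e.g. [('a', 3), ('b', 3)]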

When I tried to collect the results, the following error occurred:

Driver stacktrace:
17/03/07 13:38:37 INFO DAGScheduler: Job 0 failed: collect at /mnt/hgfs/VMshare/ditto-dev/pkltreeSpark_RDD.py:196, took 3.088593 s
Traceback (most recent call last):
  File "/mnt/hgfs/VMshare/pkltreeSpark_RDD.py", line 205, in <module>
    startTesting(1,1)
  File "/mnt/hgfs/VMshare/pkltreeSpark_RDD.py", line 196, in startTesting
    tmp = matchingOutcome.collect()
  File "/usr/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 809, in collect
  File "/usr/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/usr/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/spark/python/lib/pyspark.zip/pyspark/worker.py", line 174, in main
    process()
  File "/usr/spark/python/lib/pyspark.zip/pyspark/worker.py", line 169, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2407, in pipeline_func
  File "/usr/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 346, in func
  File "/usr/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1828, in combineLocally
  File "/usr/spark/python/lib/pyspark.zip/pyspark/shuffle.py", line 236, in mergeValues
    for k, v in iterator:
TypeError: 'treeStruct' object is not iterable
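
The failing line, "for k, v in iterator", tries to unpack every element of the RDD as a (key, value) pair. The same TypeError can be reproduced in plain Python, with no Spark involved:

    class treeStruct(object):
        def __init__(self,node,edge):
            self.node = node
            self.edge = edge

    # this is effectively the unpacking that mergeValues performs
    for k, v in [treeStruct({}, {})]:
        pass
    # TypeError: 'treeStruct' object is not iterable
    # (exact message wording varies by Python version)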

I'm confused. Does this mean I shouldn't use "for" loops inside the function? Or that I shouldn't build my object the way I currently do?

Besides, this error is related to how certain attributes of the RDD are iterated, not to the key-value pairs.

Any help would be great!

1 Answer


I finally figured out that the problem was introduced by my class definition: I was trying to iterate over a treeStruct that had no iterator, so it was not iterable. The problem can therefore be solved by adding an iterator to the class:

class treeStruct(object):
    def __init__(self,node,edge):
        self.node = node
        self.edge = edge

    # add an iterator so the object unpacks as (node, edge)
    def __iter__(self):
        for x in [self.node, self.edge]:
            yield x
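
With __iter__ defined, a treeStruct unpacks into exactly two values, which is what the "for k, v in iterator" line in shuffle.py needs. A quick check with placeholder data:

    t = treeStruct({"n1": "root"}, {("n1", "n2"): 1})   # hypothetical dicts
    k, v = t    # unpacking works now: k is the node dict, v the edge dict
    print(k)    # {'n1': 'root'}
    print(v)    # {('n1', 'n2'): 1}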
    

Anyway, thanks for your help! :)
