
How to split a Vector into columns - using PySpark


Context: I have a DataFrame with two columns: word and vector, where the type of the "vector" column is VectorUDT.

An example:

word    | vector
assert  | [435, 323, 324, 212, ...]

And I want to get this:

word   | v1  | v2   | v3  | v4  | v5 | v6 | ...
assert | 435 | 5435 | 698 | 356 | ...

Question:

How can I split a column containing vectors into one column per dimension, using PySpark?

Thanks in advance

1 Answer

  • 36

One possible approach is to convert to an RDD and back:

    from pyspark.ml.linalg import Vectors
    
    df = sc.parallelize([
        ("assert", Vectors.dense([1, 2, 3])),
        ("require", Vectors.sparse(3, {1: 2}))
    ]).toDF(["word", "vector"])
    
    def extract(row):
        return (row.word, ) + tuple(row.vector.toArray().tolist())
    
    df.rdd.map(extract).toDF(["word"])  # Vector values will be named _2, _3, ...
    
    ## +-------+---+---+---+
    ## |   word| _2| _3| _4|
    ## +-------+---+---+---+
    ## | assert|1.0|2.0|3.0|
    ## |require|0.0|2.0|0.0|
    ## +-------+---+---+---+
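As a plain-Python sanity check of the `extract` helper above (no Spark session required), the function simply prepends the word and spreads the vector's values into separate tuple fields; here a `namedtuple` stands in for a Spark `Row`, and a plain list stands in for `vector.toArray().tolist()`:

```python
from collections import namedtuple

# Spark-free sketch: a Row behaves like a namedtuple, and a plain list
# stands in for what vector.toArray().tolist() returns.
Row = namedtuple("Row", ["word", "vector"])

def extract(row):
    # Prepend the word, then spread each vector component into its own field.
    return (row.word,) + tuple(row.vector)

print(extract(Row("assert", [1.0, 2.0, 3.0])))  # ('assert', 1.0, 2.0, 3.0)
```

Each resulting tuple becomes one row of the new DataFrame, which is why the vector values end up as separate columns after `toDF`.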
    

Another solution is to create a UDF:

    from pyspark.sql.functions import udf, col
    from pyspark.sql.types import ArrayType, DoubleType
    
    def to_array(col):
        def to_array_(v):
            return v.toArray().tolist()
        return udf(to_array_, ArrayType(DoubleType()))(col)
    
    (df
        .withColumn("xs", to_array(col("vector")))
        .select(["word"] + [col("xs")[i] for i in range(3)]))
    
    ## +-------+-----+-----+-----+
    ## |   word|xs[0]|xs[1]|xs[2]|
    ## +-------+-----+-----+-----+
    ## | assert|  1.0|  2.0|  3.0|
    ## |require|  0.0|  2.0|  0.0|
    ## +-------+-----+-----+-----+
    

For a Scala equivalent, see Spark Scala: How to convert Dataframe[vector] to DataFrame[f1:Double, ..., fn: Double)].
