
Renaming column names of a DataFrame in Spark Scala


I am trying to convert all the headers / column names of a DataFrame in Spark-Scala. As of now I have come up with the following code, which only replaces a single column name.

for( i <- 0 to origCols.length - 1) {
  df.withColumnRenamed(
    df.columns(i), 
    df.columns(i).toLowerCase
  );
}
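
A likely reason the loop has no effect is that withColumnRenamed returns a new DataFrame instead of modifying df in place, so each result is discarded. A minimal sketch of a fix, assuming the df from the loop above, is to carry the result forward:

var renamed = df
for (c <- df.columns) {
  // withColumnRenamed returns a new DataFrame; reassign to keep each rename
  renamed = renamed.withColumnRenamed(c, c.toLowerCase)
}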

3 Answers

  • 187

    If the structure is flat:

    val df = Seq((1L, "a", "foo", 3.0)).toDF
    df.printSchema
    // root
    //  |-- _1: long (nullable = false)
    //  |-- _2: string (nullable = true)
    //  |-- _3: string (nullable = true)
    //  |-- _4: double (nullable = false)
    

    The simplest thing you can do is to use the toDF method:

    val newNames = Seq("id", "x1", "x2", "x3")
    val dfRenamed = df.toDF(newNames: _*)
    
    dfRenamed.printSchema
    // root
    // |-- id: long (nullable = false)
    // |-- x1: string (nullable = true)
    // |-- x2: string (nullable = true)
    // |-- x3: double (nullable = false)
    
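    As a side note, toDF expects exactly as many names as the DataFrame has columns. Applied to the original question (lower-casing every header), a one-line sketch assuming the df above:

    // Derive the new names from the existing ones and pass them to toDF
    val lowered = df.toDF(df.columns.map(_.toLowerCase): _*)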

    If you want to rename individual columns you can use select with alias:

    df.select($"_1".alias("x1"))
    

    which can be easily generalized to multiple columns:

    import org.apache.spark.sql.functions.col

    val lookup = Map("_1" -> "foo", "_3" -> "bar")

    df.select(df.columns.map(c => col(c).as(lookup.getOrElse(c, c))): _*)
    

    or withColumnRenamed:

    df.withColumnRenamed("_1", "x1")
    

    which can be used with foldLeft to rename multiple columns:

    lookup.foldLeft(df)((acc, ca) => acc.withColumnRenamed(ca._1, ca._2))
    
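    One caveat with this pattern: withColumnRenamed is a no-op when the existing name is not present in the schema, so entries in lookup that do not match any column are silently ignored.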

    With nested structures (structs) one possible option is renaming by selecting a whole structure:

    val nested = spark.read.json(sc.parallelize(Seq(
        """{"foobar": {"foo": {"bar": {"first": 1.0, "second": 2.0}}}, "id": 1}"""
    )))
    
    nested.printSchema
    // root
    //  |-- foobar: struct (nullable = true)
    //  |    |-- foo: struct (nullable = true)
    //  |    |    |-- bar: struct (nullable = true)
    //  |    |    |    |-- first: double (nullable = true)
    //  |    |    |    |-- second: double (nullable = true)
    //  |-- id: long (nullable = true)
    
    import org.apache.spark.sql.functions.struct

    @transient val foobarRenamed = struct(
      struct(
        struct(
          $"foobar.foo.bar.first".as("x"), $"foobar.foo.bar.second".as("y")
        ).alias("point")
      ).alias("location")
    ).alias("record")
    
    nested.select(foobarRenamed, $"id").printSchema
    // root
    //  |-- record: struct (nullable = false)
    //  |    |-- location: struct (nullable = false)
    //  |    |    |-- point: struct (nullable = false)
    //  |    |    |    |-- x: double (nullable = true)
    //  |    |    |    |-- y: double (nullable = true)
    //  |-- id: long (nullable = true)
    

    Note that it may affect nullability metadata. Another possibility is to rename by casting:

    nested.select($"foobar".cast(
      "struct<location:struct<point:struct<x:double,y:double>>>"
    ).alias("record")).printSchema
    
    // root
    //  |-- record: struct (nullable = true)
    //  |    |-- location: struct (nullable = true)
    //  |    |    |-- point: struct (nullable = true)
    //  |    |    |    |-- x: double (nullable = true)
    //  |    |    |    |-- y: double (nullable = true)
    

    or:

    import org.apache.spark.sql.types._
    
    nested.select($"foobar".cast(
      StructType(Seq(
        StructField("location", StructType(Seq(
          StructField("point", StructType(Seq(
            StructField("x", DoubleType), StructField("y", DoubleType)))))))))
    ).alias("record")).printSchema
    
    // root
    //  |-- record: struct (nullable = true)
    //  |    |-- location: struct (nullable = true)
    //  |    |    |-- point: struct (nullable = true)
    //  |    |    |    |-- x: double (nullable = true)
    //  |    |    |    |-- y: double (nullable = true)
    
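    Note that the cast-based variants match struct fields by position, so the replacement names have to be listed in the same order as the original fields, and the target types must be compatible with the existing ones.
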
  • 3

    For those of you interested in the PySpark version (it is actually the same in Scala - see the comments below):

    merchants_df_renamed = merchants_df.toDF(
        'merchant_id', 'category', 'subcategory', 'merchant')
    
    merchants_df_renamed.printSchema()
    

    Result:

    root
     |-- merchant_id: integer (nullable = true)
     |-- category: string (nullable = true)
     |-- subcategory: string (nullable = true)
     |-- merchant: string (nullable = true)

  • 13
    def aliasAllColumns(t: DataFrame, p: String = "", s: String = ""): DataFrame =
    {
      t.select( t.columns.map { c => t.col(c).as( p + c + s) } : _* )
    }
    

    In case it isn't obvious, this adds a prefix and a suffix to each of the current column names. This can be useful when you have two tables with one or more columns sharing the same name, and you wish to join them but still be able to disambiguate the columns in the resulting table. It sure would be nice if there were a similar way to do this in "normal" SQL.
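
    For example, a hypothetical join of two DataFrames that share column names (dfA and dfB, each with an id column, are placeholders here):

    // Prefix each side's columns so the joined result is unambiguous
    val left   = aliasAllColumns(dfA, "a_")
    val right  = aliasAllColumns(dfB, "b_")
    val joined = left.join(right, left("a_id") === right("b_id"))
    // joined.columns: a_id, ..., b_id, ...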
