
Spark SQL removes whitespace


I have a simple Spark program that reads a JSON file and emits a CSV file. In the JSON data the values contain leading and trailing whitespace, but when I emit the CSV the leading and trailing whitespace is gone. Is there a way to preserve it? I have tried many options, such as ignoreTrailingWhiteSpace and ignoreLeadingWhiteSpace, but with no luck.

input.json

{"key" : "k1", "value1": "Good String", "value2": "Good String"}
{"key" : "k1", "value1": "With Spaces      ", "value2": "With Spaces      "}
{"key" : "k1", "value1": "with tab\t", "value2": "with tab\t"}

output.csv

_corrupt_record,key,value1,value2
,k1,Good String,Good String
,k1,With Spaces,With Spaces
,k1,with tab,with tab

expected.csv

_corrupt_record,key,value1,value2
,k1,Good String,Good String
,k1,With Spaces      ,With Spaces      
,k1,with tab\t,with tab\t

My code:

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

import org.apache.spark.SparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class TestSpark {

    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession
                .builder()
                .appName(TestSpark.class.getName())
                .master("local[1]").getOrCreate();

        SparkContext context = sparkSession.sparkContext();
        context.setLogLevel("ERROR");
        SQLContext sqlCtx = sparkSession.sqlContext();
        System.out.println("Spark context established");

        // Explicit schema; _corrupt_record captures malformed JSON lines.
        List<StructField> kvFields = new ArrayList<>();
        kvFields.add(DataTypes.createStructField("_corrupt_record", DataTypes.StringType, true));
        kvFields.add(DataTypes.createStructField("key", DataTypes.StringType, true));
        kvFields.add(DataTypes.createStructField("value1", DataTypes.StringType, true));
        kvFields.add(DataTypes.createStructField("value2", DataTypes.StringType, true));
        StructType employeeSchema = DataTypes.createStructType(kvFields);

        Dataset<Row> dataset =
                sparkSession.read()
                        .option("inferSchema", false)
                        .format("json")
                        .schema(employeeSchema)
                        .load("D:\\dev\\workspace\\java\\simple-kafka\\key_value.json");

        dataset.createOrReplaceTempView("sourceView");
        sqlCtx.sql("select * from sourceView")
                .write()
                .option("header", true)
                .format("csv")
                .save("D:\\dev\\workspace\\java\\simple-kafka\\output\\" + UUID.randomUUID().toString());
        sparkSession.close();
    }
}

Update

Added the POM dependencies:

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>2.1.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>2.1.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql-kafka-0-10_2.10</artifactId>
        <version>2.1.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.10</artifactId>
        <version>2.1.0</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.22</version>
    </dependency>
</dependencies>

3 Answers

  • 6

    The CSV writer trims leading and trailing whitespace by default. You can turn that off with:

    sqlCtx.sql("select * from sourceView").write.
           option("header", true).
           option("ignoreLeadingWhiteSpace",false). // you need this
           option("ignoreTrailingWhiteSpace",false). // and this
           format("csv").save("/my/file/location")
    

    This works for me. If it doesn't work for you, could you post what you tried? Which Spark version are you using? If I remember correctly, they introduced this feature last year.
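
    In the asker's Java code, the same two writer options would be set like this - a minimal sketch, assuming Spark 2.2+ where these writer-side options exist, and reusing the output path from the question:

    sqlCtx.sql("select * from sourceView")
            .write()
            .option("header", true)
            .option("ignoreLeadingWhiteSpace", false)  // keep leading whitespace
            .option("ignoreTrailingWhiteSpace", false) // keep trailing whitespace
            .format("csv")
            .save("D:\\dev\\workspace\\java\\simple-kafka\\output\\" + UUID.randomUUID().toString());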

  • 2

    With Apache Spark 2.2, you can just use the "ignoreLeadingWhiteSpace" and "ignoreTrailingWhiteSpace" options (see @Roberto Congiu's answer for details).

    I think it should be the default behavior in earlier Apache Spark versions - I'm not sure.

    With Apache Spark 1.3, you can use the "univocity" parserLib to specify it explicitly:

    df.write
      .option("parserLib","univocity")
      .option("ignoreLeadingWhiteSpace","false")
      .option("ignoreTrailingWhiteSpace","false")
      .format("csv")
      .save(outputPath) // outputPath is a placeholder; nothing is written until save() is called
    

    Old "incorrect" answer - it shows how to get rid of leading and trailing whitespace and tabs across the whole data frame (in all columns).

    Here is a Scala solution:

    Source DF:

    scala> val df = spark.read.json("file:///temp/a.json")
    df: org.apache.spark.sql.DataFrame = [key: string, value1: string ... 1 more field]
    
    scala> df.show
    +---+-----------------+-----------------+
    |key|           value1|           value2|
    +---+-----------------+-----------------+
    | k1|      Good String|      Good String|
    | k1|With Spaces      |With Spaces      |
    | k1|        with tab   |        with tab       |
    +---+-----------------+-----------------+
    

    Solution:

    import org.apache.spark.sql.functions._
    
    val df2 = df.select(df.columns.map(c => regexp_replace(col(c),"(^\\s+|\\s+$)","").alias(c)):_*)
    

    Result:

    scala> df2.show
    +---+----------+----------+
    |key|    value1|    value2|
    +---+----------+----------+
    | k1|GoodString|GoodString|
    | k1|WithSpaces|WithSpaces|
    | k1|   withtab|   withtab|
    +---+----------+----------+
    

    P.S. It should be very similar in Java Spark...
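
    As a rough sketch, the same column-wise replace could be written in Java like this, where df is assumed to be the Dataset<Row> loaded from the JSON file:

    // Assumes: import java.util.Arrays; and import static org.apache.spark.sql.functions.*;
    // Build one trimmed column per input column, then select them all.
    Column[] trimmed = Arrays.stream(df.columns())
            .map(c -> regexp_replace(col(c), "(^\\s+|\\s+$)", "").alias(c))
            .toArray(Column[]::new);
    Dataset<Row> df2 = df.select(trimmed);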

  • 3
    // hope these two options can solve your question
    spark.read.json(inputPath).write
        .option("ignoreLeadingWhiteSpace",false)
        .option("ignoreTrailingWhiteSpace", false)
        .csv(outputPath)
    

    You can check the following links for more information (the JIRA ticket and pull request that added these writer-side options):

    https://issues.apache.org/jira/browse/SPARK-18579

    https://github.com/apache/spark/pull/17310

    Thanks.
