
Spark: java.lang.UnsupportedOperationException: No Encoder found for java.time.LocalDate


I am writing a Spark application with version 2.1.1. When a method is called with a LocalDate parameter, the code below fails with the following error:

Exception in thread "main" java.lang.UnsupportedOperationException: No Encoder found for java.time.LocalDate
- field (class: "java.time.LocalDate", name: "_2")
- root class: "scala.Tuple2"
        at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor(ScalaReflection.scala:602)
        at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$9.apply(ScalaReflection.scala:596)
        at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$9.apply(ScalaReflection.scala:587)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
        at scala.collection.immutable.List.flatMap(List.scala:344)
        at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor(ScalaReflection.scala:587)
....
import java.time.LocalDate
import org.apache.spark.{SparkConf, SparkContext}

val date: LocalDate = ....
val conf = new SparkConf()
val sc = new SparkContext(conf.setAppName("Test").setMaster("local[*]"))
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

val itemListJob = new ItemList(sqlContext, jdbcSqlConn)
import sqlContext.implicits._
val processed = itemListJob.run(rc, priority).select("id").map(d => {
  runJob.run(d, date) // returns a Tuple2 whose second field is the LocalDate
})

class ItemList(sqlContext: org.apache.spark.sql.SQLContext, jdbcSqlConn: String) {
  def run(date: LocalDate) = {
    import sqlContext.implicits._ // provides the Encoder[Int] needed by .as[Int]
    sqlContext.read.format("jdbc").options(Map(
      "driver" -> "com.microsoft.sqlserver.jdbc.SQLServerDriver",
      "url" -> jdbcSqlConn,
      "dbtable" -> s"dbo.GetList('$date')"
    )).load()
      .select("id")
      .as[Int]
  }
}
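
For reference, the failure does not depend on the JDBC read. Spark 2.1 derives encoders for java.sql.Date and java.sql.Timestamp, but not for java.time.LocalDate (built-in java.time support only arrived in Spark 3.0), so any tuple or case class carrying a LocalDate field hits the same runtime reflection error. A minimal sketch, reusing the sqlContext from the snippet above:

import java.time.LocalDate
import sqlContext.implicits._

// This compiles (Tuple2 is a Product with a TypeTag), but the encoder is
// built via reflection at runtime, which throws the exception shown above.
val fails = Seq((1, LocalDate.now())).toDS()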

Update: I changed the return type of runJob.run() to the tuple (Int, java.sql.Date) and changed the code in the .map(...) lambda to

val processed = itemListJob.run(rc, priority).select("id").map(d => {
  val (a, b) = runJob.run(d, date)
  $"$a, $b" // the $ interpolator builds a Column, not an encodable value
})

Now the error has changed to:

[error] C:\....\scala\main.scala:40: Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._  Support for serializing other types will be added in future releases. 
[error]     val processed = itemListJob.run(rc, priority).map(d => { 
[error]                                                      ^ 
[error] one error found 
[error] (compile:compileIncremental) Compilation failed
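
This second error is expected: the $ interpolator from sqlContext.implicits._ produces an org.apache.spark.sql.Column, and the body of a Dataset map must return a value that has an encoder. A sketch of the likely intent, assuming runJob.run now returns (Int, java.sql.Date) as described in the update, is to return the tuple itself so the implicits-derived tuple encoder applies:

// Sketch: (Int, java.sql.Date) is a Product of encodable types, so
// sqlContext.implicits._ derives its encoder automatically. A LocalDate can
// be converted at the edge with java.sql.Date.valueOf(localDate) if needed.
val processed = itemListJob.run(rc, priority).select("id").map(d => {
  runJob.run(d, date)
})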

1 Answer


For custom Dataset types, you can use the Kryo serde framework, as long as your data is actually serializable (i.e. implements Serializable). Here is an example of using Kryo: Spark No Encoder found for java.io.Serializable in Map[String, java.io.Serializable].

Kryo is generally recommended because it is much faster and also works with classes that use the Java serde framework. You can certainly choose Java's native serde (ObjectOutputStream / ObjectInputStream), but it is much slower.

As the comments above noted, Spark SQL ships many useful encoders under sqlContext.implicits._, but they do not cover everything, so you may need to plug in your own encoder.

As I said, your custom data must be serializable, and according to https://docs.oracle.com/javase/8/docs/api/java/time/LocalDate.html, LocalDate implements the Serializable interface, so you are definitely fine here.
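
A minimal sketch of what plugging in such an encoder could look like, using Spark's Encoders.kryo and Encoders.tuple (the implicit val names here are illustrative, not from the original answer):

import java.time.LocalDate
import org.apache.spark.sql.{Encoder, Encoders}

// Kryo-backed encoder: the LocalDate is stored as a single binary column,
// so it cannot be queried with column expressions, but map/filter still work.
implicit val localDateEncoder: Encoder[LocalDate] = Encoders.kryo[LocalDate]

// For the (Int, LocalDate) tuple from the question, combine encoders explicitly:
implicit val intLocalDateEncoder: Encoder[(Int, LocalDate)] =
  Encoders.tuple(Encoders.scalaInt, Encoders.kryo[LocalDate])

With these implicits in scope, the original map returning (Int, LocalDate) picks up the explicit tuple encoder instead of failing runtime reflection.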
