
"No RowReaderFactory can be found for this type" error when trying to map Cassandra rows to case class objects with spark-cassandra-connector

I am trying to get a simple example working that maps rows from Cassandra to a Scala case class, using Apache Spark 1.1.1, Cassandra 2.0.11, and the spark-cassandra-connector (v1.1.0). I have looked through the documentation on the spark-cassandra-connector GitHub page, planetcassandra.org, and DataStax, and searched around generally; but I have not found anyone else running into this problem. So here goes...

I am building a tiny Spark application with sbt (0.13.5), Scala 2.10.4, and Spark 1.1.1 against Cassandra 2.0.11. Modeled on the example in the spark-cassandra-connector documentation, the following two lines raise an error in my IDE and fail to compile.

case class SubHuman(id:String, firstname:String, lastname:String, isGoodPerson:Boolean)
val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id","firstname","lastname","isGoodPerson").toArray

The terse error Eclipse gives is:

No RowReaderFactory can be found for this type

The compile error is only slightly more verbose:

> compile
[info] Compiling 1 Scala source to /home/bkarels/dev/simple-case/target/scala-2.10/classes...
[error] /home/bkarels/dev/simple-case/src/main/scala/com/bradkarels/simple/SimpleApp.scala:82: No RowReaderFactory can be found for this type
[error]     val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id","firstname","lastname","isGoodPerson").toArray
[error]                                          ^
[error] one error found
[error] (compile:compile) Compilation failed
[error] Total time: 1 s, completed Dec 10, 2014 9:01:30 AM
>

Scala source:

package com.bradkarels.simple

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import com.datastax.spark.connector._
import com.datastax.spark.connector.rdd._
// Likely don't need this import - but throwing darts hits the bullseye once in a while...
import com.datastax.spark.connector.rdd.reader.RowReaderFactory

object CaseStudy {

  def main(args: Array[String]) {
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "127.0.0.1")

    val sc = new SparkContext("spark://127.0.0.1:7077", "simple", conf)

    case class SubHuman(id:String, firstname:String, lastname:String, isGoodPerson:Boolean)
    val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id","firstname","lastname","isGoodPerson").toArray
  }
}

With the troublesome lines removed, everything compiles, assembly works, and I can run other Spark operations normally. For example, if I remove the problem lines and insert:

val rdd:CassandraRDD[CassandraRow] = sc.cassandraTable("nicecase", "human")

I get the RDD back and can work with it as expected. That is, I do not suspect that my sbt project, assembly plugin, etc. are contributing to the problem. The working source (minus the new attempt at mapping to a case class as intended) can be found on GitHub here.
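For reference, here is a minimal sketch of how I consume those untyped rows (this assumes the connector's typed getters such as getString and getBoolean, and the column names from my table):

// Pull individual columns out of each CassandraRow by name.
val goodPeople = rdd.map(row => (row.getString("firstname"), row.getBoolean("isGoodPerson")))
goodPeople.take(5).foreach(println)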

However, to be more thorough, my build.sbt:

name := "Simple Case"

version := "0.0.1"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-core" % "1.1.1",
    "org.apache.spark" %% "spark-sql" % "1.1.1",
    "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.0" withSources() withJavadoc()
  )

So the question is: what am I missing? Hopefully it is something silly, but if you have run into this and can help me get past this perplexing little issue, I would be most grateful. Please let me know if any other details would help with troubleshooting.

Thanks.

1 Answer

  • 3

    This may be my newness to Scala showing, but I solved the problem by moving the case class declaration out of the main method. (Presumably the implicit RowReaderFactory cannot be derived for a case class defined locally inside a method.) The simplified source now looks like this:

    package com.bradkarels.simple
    
    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._
    import org.apache.spark.SparkConf
    import com.datastax.spark.connector._
    import com.datastax.spark.connector.rdd._
    
    object CaseStudy {
    
      case class SubHuman(id:String, firstname:String, lastname:String, isGoodPerson:Boolean)
    
      def main(args: Array[String]) {
        val conf = new SparkConf(true)
          .set("spark.cassandra.connection.host", "127.0.0.1")
    
        val sc = new SparkContext("spark://127.0.0.1:7077", "simple", conf)
    
        val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id","firstname","lastname","isGoodPerson").toArray
      }
    }
    

    The complete source (updated and fixed) can be found on GitHub: https://github.com/bradkarels/spark-cassandra-to-scala-case-class
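
    If the case class really must stay local to the method for some reason, one possible workaround (an untested sketch, assuming the same table and columns as above) is to skip the implicit RowReaderFactory derivation entirely and map the untyped rows by hand:

    val bar = sc.cassandraTable("nicecase", "human").map { row =>
      // Build the case class manually from the generic CassandraRow getters.
      SubHuman(row.getString("id"), row.getString("firstname"),
        row.getString("lastname"), row.getBoolean("isGoodPerson"))
    }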
