I am trying to write a Scala test case for my Spark job, but it does not work and throws an InvocationTargetException.

I am trying to create a new SQLContext and read a JSON file from my local machine. I am using Eclipse.

sc = new SparkContext(conf)
sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
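
Roughly, the test harness is wired up like this (a minimal sketch, not my exact code; the suite name comes from the stack trace below, and the paths and test body are placeholders):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import org.scalatest.{BeforeAndAfterAll, FunSuite}

// Minimal sketch of the test harness; names and paths are placeholders.
class DataFrameOperationsTest extends FunSuite with BeforeAndAfterAll {

  var sc: SparkContext = _
  var sqlContext: HiveContext = _

  override def beforeAll(): Unit = {
    val conf = new SparkConf()
      .setAppName("DataFrameOperationsTest")
      .setMaster("local[2]") // running inside Eclipse, no cluster
    sc = new SparkContext(conf)
    sqlContext = new HiveContext(sc)
  }

  override def afterAll(): Unit = {
    if (sc != null) sc.stop()
  }

  test("read a local json file") {
    // Placeholder path; the real file lives on my local box.
    val df = sqlContext.read.json("src/test/resources/sample.json")
    assert(df.count() > 0)
  }
}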

The full exception is:

java.lang.reflect.InvocationTargetException
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:67)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
    at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
    at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
    at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1318)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1006)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1003)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.SparkContext.withScope(SparkContext.scala:700)
    at org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:1003)
    at org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:818)
    at org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:816)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.SparkContext.withScope(SparkContext.scala:700)
    at org.apache.spark.SparkContext.textFile(SparkContext.scala:816)
    at com.databricks.spark.csv.util.TextFile$.withCharset(TextFile.scala:30)
    at com.databricks.spark.csv.DefaultSource$$anonfun$createRelation$1.apply(DefaultSource.scala:146)
    at com.databricks.spark.csv.DefaultSource$$anonfun$createRelation$1.apply(DefaultSource.scala:146)
    at com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:265)
    at com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:263)
    at com.databricks.spark.csv.CsvRelation.tokenRdd(CsvRelation.scala:89)
    at com.databricks.spark.csv.CsvRelation.buildScan(CsvRelation.scala:173)
    at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$4.apply(DataSourceStrategy.scala:60)
    at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$4.apply(DataSourceStrategy.scala:60)
    at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:279)
    at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:278)
    at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProjectRaw(DataSourceStrategy.scala:310)
    at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:56)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
    at org.apache.spark.sql.execution.SparkStrategies$Aggregation$.apply(SparkStrategies.scala:235)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
    at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:920)
    at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:918)
    at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:924)
    at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:924)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
    at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1904)
    at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1385)
    at org.apache.spark.sql.DataFrame.count(DataFrame.scala:1403)
    at com.amazon.smt.ec.datapopulator.common.DataFrameOperationsTest$$anonfun$1.apply$mcV$sp(DataFrameOperationsTest.scala:55)
    at com.amazon.smt.ec.datapopulator.common.DataFrameOperationsTest$$anonfun$1.apply(DataFrameOperationsTest.scala:47)
    at com.amazon.smt.ec.datapopulator.common.DataFrameOperationsTest$$anonfun$1.apply(DataFrameOperationsTest.scala:47)
    at org.scalatest.Transformer$$anonfun$apply$1.apply(Transformer.scala:22)
    at org.scalatest.Transformer$$anonfun$apply$1.apply(Transformer.scala:22)
    at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
    at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
    at org.scalatest.Transformer.apply(Transformer.scala:22)
    at org.scalatest.Transformer.apply(Transformer.scala:20)
    at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:158)
    at org.scalatest.Suite$class.withFixture(Suite.scala:1121)
    at org.scalatest.FunSuite.withFixture(FunSuite.scala:1559)
    at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:155)
    at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:167)
    at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:167)
    at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
    at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:167)
    at org.scalatest.FunSuite.runTest(FunSuite.scala:1559)
    at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:200)
    at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:200)
    at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
    at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
    at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
    at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
    at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:200)
    at org.scalatest.FunSuite.runTests(FunSuite.scala:1559)
    at org.scalatest.Suite$class.run(Suite.scala:1423)
    at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1559)
    at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:204)
    at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:204)
    at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
    at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:204)
    at com.amazon.smt.ec.datapopulator.common.DataFrameOperationsTest.org$scalatest$BeforeAndAfterAll$$super$run(DataFrameOperationsTest.scala:21)
    at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
    at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
    at com.amazon.smt.ec.datapopulator.common.DataFrameOperationsTest.run(DataFrameOperationsTest.scala:21)
    at org.scalatest.junit.JUnitRunner.run(JUnitRunner.scala:99)
Caused by: java.lang.IllegalArgumentException
    at org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:151)
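
The last two lines look like the real root cause: the SnappyCompressionCodec constructor throws an IllegalArgumentException, which as far as I understand usually means the snappy-java native library cannot be loaded in the local test JVM. For reference, one way I have seen suggested to take Snappy out of the picture is to switch the compression codec in the test configuration; a minimal sketch, assuming Spark 1.x configuration keys:

val conf = new SparkConf()
  .setAppName("DataFrameOperationsTest")
  .setMaster("local[2]")
  // "spark.io.compression.codec" selects the codec used for broadcast
  // and shuffle blocks; the trace shows Snappy being used here.
  // "lzf" is a pure-JVM codec, so it avoids snappy-java's native library.
  .set("spark.io.compression.codec", "lzf")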

Can someone help me? What am I missing here?