I am running a Kafka server. (When I use the command bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning, it returns all the data in my topic.)

When I tried the JavaDirectKafkaWordCount example in Spark to see how it works, I got the following error:

$ ./run-example streaming.JavaDirectKafkaWordCount localhost:2181 test
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/08/17 11:19:33 INFO StreamingExamples: Setting log level to [WARN] for streaming example. To override add a custom log4j.properties to the classpath.
16/08/17 11:19:33 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/08/17 11:19:33 WARN Utils: Your hostname, localhost.localdomain resolves to a loopback address: 127.0.0.1; using 10.66.212.132 instead (on interface enp5s0)
16/08/17 11:19:33 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Exception in thread "main" org.apache.spark.SparkException: java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
    at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
    at scala.util.Either.fold(Either.scala:97)
    at org.apache.spark.streaming.kafka.KafkaCluster$.checkErrors(KafkaCluster.scala:365)
    at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:222)
    at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
    at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:607)
    at org.apache.spark.streaming.kafka.KafkaUtils.createDirectStream(KafkaUtils.scala)
    at org.apache.spark.examples.streaming.JavaDirectKafkaWordCount.main(JavaDirectKafkaWordCount.java:71)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
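For context on the failing line (JavaDirectKafkaWordCount.java:71), the example passes its first command-line argument straight into the Kafka client configuration as a broker list before calling KafkaUtils.createDirectStream. Here is a minimal sketch of that step (the class and method names below are mine, not from the Spark sources); note the value is used as "metadata.broker.list", so it is expected to point at Kafka brokers (default port 9092) rather than at ZooKeeper (port 2181):

```java
import java.util.HashMap;
import java.util.Map;

public class KafkaParamsSketch {
    // Approximates what the direct-stream example does with its first
    // argument: it becomes "metadata.broker.list" in the Kafka params map
    // handed to KafkaUtils.createDirectStream, which then tries to open a
    // plain Kafka-protocol socket to each listed host:port.
    static Map<String, String> buildKafkaParams(String brokers) {
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", brokers);
        return kafkaParams;
    }

    public static void main(String[] args) {
        // Brokers, not ZooKeeper: "localhost:9092" rather than "localhost:2181".
        System.out.println(buildKafkaParams("localhost:9092"));
    }
}
```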

I would like to know what this error means and how I can fix it.

Thank you very much for your attention and help.