We created a Hive external table mapped to HBase, but when we try to access the table from spark-shell by running the following simple query,

scala> sqlContext.sql("select * from [Hive-External-Table]").count()

the latter part of the log looks like this:

15/10/13 17:13:08 INFO ClientCnxn: Opening socket connection to server BData-h1/10.10.10.82:2181. Will not attempt to authenticate using SASL (unknown error)
15/10/13 17:13:08 INFO ClientCnxn: Socket connection established, initiating session, client: /10.10.10.82:34108, server: BData-h1/10.10.10.82:2181
15/10/13 17:13:08 INFO ClientCnxn: Session establishment complete on server BData-h1/10.10.10.82:2181, sessionid = 0x1505f6805c70281, negotiated timeout = 60000
15/10/13 17:13:09 INFO RegionSizeCalculator: Calculating region sizes for table "analytics_demo".
15/10/13 17:13:24 INFO SparkContext: Starting job: collect at SparkPlan.scala:83
15/10/13 17:13:24 INFO DAGScheduler: Got job 0 (collect at SparkPlan.scala:83) with 1 output partitions (allowLocal=false)
15/10/13 17:13:24 INFO DAGScheduler: Final stage: Stage 0(collect at SparkPlan.scala:83)
15/10/13 17:13:24 INFO DAGScheduler: Parents of final stage: List()
15/10/13 17:13:24 INFO DAGScheduler: Missing parents: List()
15/10/13 17:13:24 INFO DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[7] at map at SparkPlan.scala:83), which has no missing parents
15/10/13 17:13:24 INFO MemoryStore: ensureFreeSpace(16512) called with curMem=601741, maxMem=278302556
15/10/13 17:13:24 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 16.1 KB, free 264.8 MB)
15/10/13 17:13:24 INFO MemoryStore: ensureFreeSpace(8676) called with curMem=618253, maxMem=278302556
15/10/13 17:13:24 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 8.5 KB, free 264.8 MB)
15/10/13 17:13:24 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on BData-h1:35111 (size: 8.5 KB, free: 265.4 MB)
15/10/13 17:13:24 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
15/10/13 17:13:24 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:839
15/10/13 17:13:24 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (MapPartitionsRDD[7] at map at SparkPlan.scala:83)
15/10/13 17:13:24 INFO YarnScheduler: Adding task set 0.0 with 1 tasks
15/10/13 17:13:25 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, BData-h2, RACK_LOCAL, 1424 bytes)
15/10/13 17:13:25 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on BData-h2:37159 (size: 8.5 KB, free: 530.3 MB)
15/10/13 17:13:27 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on BData-h2:37159 (size: 42.8 KB, free: 530.2 MB)

The system just hangs there.

If we query a Hive managed table instead, it returns results just fine.

We have tried many things, but with no luck. Can anyone shed some light on this issue for us?
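
For reference, the external table in question was created with DDL along these lines (a sketch only; the column names and mapping are placeholders, not our exact schema; `analytics_demo` is the table name that appears in the log above):

```sql
-- Hypothetical example of a Hive external table backed by HBase.
-- Column names and the hbase.columns.mapping are illustrative placeholders.
CREATE EXTERNAL TABLE analytics_demo (
  rowkey STRING,
  value  STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:value')
TBLPROPERTIES ('hbase.table.name' = 'analytics_demo');
```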