I have Apache Spark 2.1.1 and I am using Spark SQL. This is how I initialize Spark:

SparkSession spark = SparkSession
           .builder()
           .master("spark://ipaddress")
           .appName("Java Spark Hive Example")
           .enableHiveSupport()
           .getOrCreate();
spark.sql("CREATE TABLE IF NOT EXISTS Person(name STRING , id INT) row format delimited fields terminated BY ',' lines terminated BY '\n' tblproperties(skip.header.line.count=1)

This is what I get:

Exception in thread "main" org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:file:/user/hive/warehouse/person is not a directory or unable to create one);
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:98)
    at org.apache.spark.sql.hive.HiveExternalCatalog.createTable(HiveExternalCatalog.scala:191)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:248)
    at org.apache.spark.sql.execution.command.CreateTableCommand.run(tables.scala:116)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)

I have set the default filesystem to HDFS in the core-site.xml file under spark/conf:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
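
As far as I understand, in Spark 2.x the warehouse location is governed by the spark.sql.warehouse.dir property rather than by core-site.xml alone, so it may need to be pointed at HDFS explicitly. A minimal sketch of setting it when building the session (the HDFS warehouse path below is only an assumed example):

SparkSession spark = SparkSession
           .builder()
           .master("spark://ipaddress")
           .appName("Java Spark Hive Example")
           // assumed HDFS warehouse location; adjust to the actual cluster layout
           .config("spark.sql.warehouse.dir", "hdfs://localhost:9000/user/hive/warehouse")
           .enableHiveSupport()
           .getOrCreate();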

I do not understand why Spark is creating the warehouse on the local filesystem instead of on HDFS.
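
One way to see which warehouse location the running session has actually resolved is to read it back from the session configuration; a minimal sketch using the standard RuntimeConfig API:

// Prints the warehouse directory the session is actually using
System.out.println(spark.conf().get("spark.sql.warehouse.dir"));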