I installed Spark using the AWS EC2 guide and I can launch the program fine using the bin/pyspark script to get to the Spark prompt, and I can also run the Quick Start guide successfully.
However, I cannot for the life of me figure out how to stop all of the verbose INFO logging after each command.
I have tried nearly every possible scenario in the code below (commenting out, setting to OFF) in my log4j.properties file in the conf folder where I launch the application from, as well as on each node, and nothing works. I still get the INFO statements printing after executing each statement.
I am very confused with how this is supposed to work.
# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
Here is my full classpath when I use SPARK_PRINT_LAUNCH_COMMAND:
Spark Command: /Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/Contents/Home/bin/java -cp :/root/spark-1.0.1-bin-hadoop2/conf:/root/spark-1.0.1-bin-hadoop2/conf:/root/spark-1.0.1-bin-hadoop2/lib/spark-assembly-1.0.1-hadoop2.2.0.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-api-jdo-3.2.1.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-core-3.2.2.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-rdbms-3.2.1.jar -XX:MaxPermSize=128m -Djava.library.path= -Xms512m -Xmx512m org.apache.spark.deploy.SparkSubmit spark-shell --class org.apache.spark.repl.Main
Contents of spark-env.sh:
#!/usr/bin/env bash
# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.
# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
# - SPARK_CLASSPATH=/root/spark-1.0.1-bin-hadoop2/conf/
# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos
# Options read in YARN client mode
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_EXECUTOR_INSTANCES, Number of workers to start (Default: 2)
# - SPARK_EXECUTOR_CORES, Number of cores for the workers (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Worker (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb)
# - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
# - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: ‘default’)
# - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.
# - SPARK_YARN_DIST_ARCHIVES, Comma separated list of archives to be distributed with the job.
# Options for the daemons used in the standalone deploy mode:
# - SPARK_MASTER_IP, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers
export SPARK_SUBMIT_CLASSPATH="$FWDIR/conf"
13 Answers
The way I do it is:
In the location where I run the spark-submit script, change INFO in the log4j.properties there to whatever level of logging you want, and then run spark-submit.
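A minimal sketch of that workflow, assuming a distribution that keeps its defaults under /etc/spark/conf (both paths here are assumptions, not from the original answer):

# copy the cluster's log4j.properties next to where spark-submit is run (source path is an assumption)
cp /etc/spark/conf/log4j.properties .
# then edit the copy and lower the root level, e.g.
#   log4j.rootCategory=WARN, console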
I wanted to keep using logging (the logging facility for Python), so you can try splitting the configuration for your application and for Spark:
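A minimal sketch of that split, assuming the application configures its own handlers with logging.basicConfig; only the "py4j" logger name comes from pyspark's Python side:

import logging

# configure the application's own logging as usual
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# quiet the chatter that pyspark routes through Python's logging via the py4j gateway
logging.getLogger("py4j").setLevel(logging.WARNING)

Note this only affects messages that pass through Python's logging module; the JVM-side log4j output is governed by the approaches in the other answers.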
Just add the following parameter to your spark-submit command:
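The parameter itself was stripped from this answer; given the property name cited just below, it presumably had this shape:

--conf "spark.driver.extraJavaOptions=-Dlog4jspark.root.logger=WARN,console"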
This temporarily overrides the system value, only for that job. Check the exact property name (log4jspark.root.logger here) in your log4j.properties file.
Hope this helps, cheers!
Just execute this command in the spark directory:

cp conf/log4j.properties.template conf/log4j.properties

Edit log4j.properties and replace the first line:

log4j.rootCategory=INFO, console

with:

log4j.rootCategory=WARN, console

Save and restart your shell. It works for me for Spark 1.1.0 and Spark 1.5.1 on OS X.
Inspired by pyspark/tests.py, I did the following:
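The snippet was stripped from this answer; the tests.py trick reaches the JVM-side log4j API through the py4j gateway, roughly like this (the quiet_logs name is illustrative):

def quiet_logs(sc):
    # grab the JVM-side log4j package through the py4j gateway
    logger = sc._jvm.org.apache.log4j
    logger.LogManager.getLogger("org").setLevel(logger.Level.ERROR)
    logger.LogManager.getLogger("akka").setLevel(logger.Level.ERROR)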
Calling this just after creating the SparkContext reduced the stderr lines logged for my test from 2647 to 163. However, creating the SparkContext itself logs 163 lines before this takes effect, and it is not clear to me how to adjust those programmatically.
Edit your conf/log4j.properties file and change the following line:

log4j.rootCategory=INFO, console

to:

log4j.rootCategory=ERROR, console
Another approach would be to fire up spark-shell and type in the following:
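The commands were stripped from this answer; judging from the similar tricks elsewhere in the thread, they turned the "org" and "akka" loggers off through the log4j API, something like:

import org.apache.log4j.{Level, Logger}

Logger.getLogger("org").setLevel(Level.OFF)
Logger.getLogger("akka").setLevel(Level.OFF)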
You won't see any logs after that.
You can also use
sc.setLogLevel("FATAL")
to set the log level in your scripts. From the docs: this overrides any user-defined log settings; valid log levels include ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN. In Spark 2.0 you can also configure it dynamically for your application using setLogLevel:
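The snippet was stripped here; in pyspark the Spark 2.0 variant presumably looked roughly like this (master and appName are placeholder values):

from pyspark.sql import SparkSession

# build (or reuse) a session; master and appName are placeholders
spark = SparkSession.builder \
    .master("local") \
    .appName("quiet-logs") \
    .getOrCreate()

# dynamically set the log level for this application
spark.sparkContext.setLogLevel("WARN")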
In the pyspark console, the default spark session is already available.

This may be due to how Spark computes its classpath. My hunch is that Hadoop's
log4j.properties
file is appearing ahead of Spark's on the classpath, preventing your changes from taking effect. If you run the shell with SPARK_PRINT_LAUNCH_COMMAND=1 set (e.g. SPARK_PRINT_LAUNCH_COMMAND=1 bin/spark-shell), Spark will print the full classpath used to launch the shell; in my case, I see one where

/root/ephemeral-hdfs/conf

is at the head of the classpath.

I've opened an issue [SPARK-2913] to fix this in the next release (and I should have a patch out soon).
In the meantime, here are a couple of workarounds:
Add export SPARK_SUBMIT_CLASSPATH="$FWDIR/conf" to spark-env.sh.

Delete (or rename) /root/ephemeral-hdfs/conf/log4j.properties.

Spark 1.6.2:
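The 1.6.2 snippet was stripped; through the py4j gateway it presumably set the root logger level, along these lines:

# Spark 1.6.x: set the JVM root logger level via py4j
log4j = sc._jvm.org.apache.log4j
log4j.LogManager.getRootLogger().setLevel(log4j.Level.ERROR)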
Spark 2.x:
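The 2.x snippet was likewise stripped; with a SparkSession in hand it is presumably just:

spark.sparkContext.setLogLevel("ERROR")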
(where spark is the SparkSession)
Or the old way: in the Spark dir, rename conf/log4j.properties.template to conf/log4j.properties, and in log4j.properties change log4j.rootCategory=INFO, console to log4j.rootCategory=WARN, console.
Different log levels available:

OFF (most specific, no logging)
FATAL (most specific, little data)
ERROR - log only in case of errors
WARN - log only in case of warnings or errors
INFO (default)
DEBUG - log detailed steps (and all logs mentioned above)
TRACE (least specific, a lot of data)
ALL (least specific, all data)
I used this on Amazon EC2 with 1 master and 2 slaves, running Spark 1.2.1.
You can use setLogLevel.