How to turn off INFO logging in Spark?

I installed Spark using the AWS EC2 guide, and I can launch the program fine using the bin/pyspark script to get to the Spark prompt, and can also complete the Quick Start guide successfully.

However, I cannot for the life of me figure out how to stop all of the verbose INFO logging after each command.

I have tried nearly every possible scenario in the code below (commenting out lines, setting levels to OFF) in the log4j.properties file in the conf folder where I launch the application from on each node, and nothing has any effect. I still get the INFO statements printed after executing each statement.

I am very confused about how this is supposed to work.

# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender 
log4j.appender.console.target=System.err     
log4j.appender.console.layout=org.apache.log4j.PatternLayout 
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO

Here is my full classpath when I use SPARK_PRINT_LAUNCH_COMMAND:

Spark Command: /Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/Contents/Home/bin/java -cp :/root/spark-1.0.1-bin-hadoop2/conf:/root/spark-1.0.1-bin-hadoop2/conf:/root/spark-1.0.1-bin-hadoop2/lib/spark-assembly-1.0.1-hadoop2.2.0.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-api-jdo-3.2.1.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-core-3.2.2.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-rdbms-3.2.1.jar -XX:MaxPermSize=128m -Djava.library.path= -Xms512m -Xmx512m org.apache.spark.deploy.SparkSubmit spark-shell --class org.apache.spark.repl.Main

Contents of spark-env.sh:

#!/usr/bin/env bash

# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.

# Options read when launching programs locally with 
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
# - SPARK_CLASSPATH=/root/spark-1.0.1-bin-hadoop2/conf/

# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos

# Options read in YARN client mode
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_EXECUTOR_INSTANCES, Number of workers to start (Default: 2)
# - SPARK_EXECUTOR_CORES, Number of cores for the workers (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Worker (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb)
# - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
# - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: ‘default’)
# - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.
# - SPARK_YARN_DIST_ARCHIVES, Comma separated list of archives to be distributed with the job.

# Options for the daemons used in the standalone deploy mode:
# - SPARK_MASTER_IP, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers

export SPARK_SUBMIT_CLASSPATH="$FWDIR/conf"

13 Answers

  • 31

    The way I do it is:

    In the directory where I run the spark-submit script:

    $ cp /etc/spark/conf/log4j.properties .
    $ nano log4j.properties
    

    Change INFO to whatever level of logging you want, then run your spark-submit.

  • 5

    If you want to keep using logging (the logging facility for Python), you can try splitting the configurations for your application and for Spark:

    import logging

    LoggerManager()  # your application's own logging setup, as in the original answer
    logger = logging.getLogger(__name__)
    loggerSpark = logging.getLogger('py4j')
    loggerSpark.setLevel('WARNING')
    
  • 44

    Just add the following parameter to your spark-submit command:

    --conf "spark.driver.extraJavaOptions=-Dlog4jspark.root.logger=WARN,console"
    

    This overrides the system value only temporarily, for that job. Check the exact property name (log4jspark.root.logger here) against your log4j.properties file.

    Hope this helps, cheers!

  • 2

    Just execute this command in the Spark directory:

    cp conf/log4j.properties.template conf/log4j.properties
    

    Edit log4j.properties:

    # Set everything to be logged to the console
    log4j.rootCategory=INFO, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.target=System.err
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
    
    # Settings to quiet third party logs that are too verbose
    log4j.logger.org.eclipse.jetty=WARN
    log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR
    log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
    log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
    

    Replace on the first line:

    log4j.rootCategory=INFO, console
    

    with:

    log4j.rootCategory=WARN, console
    

    Save and restart your shell. It works for me for Spark 1.1.0 and Spark 1.5.1 on OS X.

  • 0

    Inspired by pyspark/tests.py, I did:

    def quiet_logs(sc):
        logger = sc._jvm.org.apache.log4j
        logger.LogManager.getLogger("org").setLevel(logger.Level.ERROR)
        logger.LogManager.getLogger("akka").setLevel(logger.Level.ERROR)
    

    Calling this right after creating the SparkContext reduced the stderr lines logged for my test from 2647 to 163. However, creating the SparkContext itself logs 163 lines, up to:

    15/08/25 10:14:16 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
    

    and it is not clear to me how to adjust those programmatically.
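
    One possible programmatic workaround, not from the original answer and only a hedged sketch: point the driver JVM at a quieter log4j.properties before the JVM starts, via PYSPARK_SUBMIT_ARGS. The file name log4j-quiet.properties is a hypothetical file you would create yourself, and whether this fully silences the context-creation lines depends on your Spark version.

    import os

    # Must be set before the SparkContext is created, because pyspark reads it
    # when launching the driver JVM; the trailing "pyspark-shell" token is
    # required by pyspark's gateway launcher.
    os.environ["PYSPARK_SUBMIT_ARGS"] = (
        '--driver-java-options "-Dlog4j.configuration=file:./log4j-quiet.properties" '
        'pyspark-shell'
    )

    from pyspark import SparkContext
    sc = SparkContext(appName="quiet-startup")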

  • 6

    Edit your conf/log4j.properties file and change the following line:

    log4j.rootCategory=INFO, console
    
    to
    
    log4j.rootCategory=ERROR, console
    

    Another way is to:

    Fire up spark-shell and type the following:

    import org.apache.log4j.Logger
    import org.apache.log4j.Level
    
    Logger.getLogger("org").setLevel(Level.OFF)
    Logger.getLogger("akka").setLevel(Level.OFF)
    

    You won't see any logs after that.

  • 130
    >>> log4j = sc._jvm.org.apache.log4j
    >>> log4j.LogManager.getRootLogger().setLevel(log4j.Level.ERROR)
    
  • 32

    You can also set the log level in your scripts with sc.setLogLevel("FATAL"). From the docs:

    Control our logLevel. This overrides any user-defined log settings. Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
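
    A minimal sketch of using this in a standalone PySpark script (the master URL and app name below are placeholder assumptions, not from the answer):

    from pyspark import SparkContext

    sc = SparkContext(master="local[*]", appName="quiet-logs-example")
    sc.setLogLevel("FATAL")  # any of the valid levels above, e.g. ERROR or WARN
    # ... run your job here; only logging after this call is affected ...
    sc.stop()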

  • 1

    In Spark 2.0 you can also configure it dynamically for your application with setLogLevel:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder \
        .master('local') \
        .appName('foo') \
        .getOrCreate()
    spark.sparkContext.setLogLevel('WARN')
    

    In the pyspark console, a default spark session is already available.

  • 0

    This may be due to how Spark computes its classpath. My hunch is that Hadoop's log4j.properties file appears ahead of Spark's on the classpath, preventing your changes from taking effect.

    If you run

    SPARK_PRINT_LAUNCH_COMMAND=1 bin/spark-shell
    

    then Spark will print the full classpath used to launch the shell; in my case, I see

    Spark Command: /usr/lib/jvm/java/bin/java -cp :::/root/ephemeral-hdfs/conf:/root/spark/conf:/root/spark/lib/spark-assembly-1.0.0-hadoop1.0.4.jar:/root/spark/lib/datanucleus-api-jdo-3.2.1.jar:/root/spark/lib/datanucleus-core-3.2.2.jar:/root/spark/lib/datanucleus-rdbms-3.2.1.jar -XX:MaxPermSize=128m -Djava.library.path=:/root/ephemeral-hdfs/lib/native/ -Xms512m -Xmx512m org.apache.spark.deploy.SparkSubmit spark-shell --class org.apache.spark.repl.Main
    

    where /root/ephemeral-hdfs/conf sits at the head of the classpath.

    I've opened an issue, [SPARK-2913], to fix this in the next release (I should have a patch out soon).

    In the meantime, here are a couple of workarounds:

    • Add export SPARK_SUBMIT_CLASSPATH="$FWDIR/conf" to spark-env.sh.

    • Delete (or rename) /root/ephemeral-hdfs/conf/log4j.properties.

  • 17

    Spark 1.6.2:

    log4j = sc._jvm.org.apache.log4j
    log4j.LogManager.getRootLogger().setLevel(log4j.Level.ERROR)
    

    Spark 2.x:

    spark.sparkContext.setLogLevel('WARN')
    

    (where spark is the SparkSession)

    Or, the old way:

    Rename conf/log4j.properties.template to conf/log4j.properties in the Spark directory.

    In log4j.properties, change log4j.rootCategory=INFO, console to log4j.rootCategory=WARN, console

    The different log levels available (a small sketch applying one follows this list):

    • OFF (most specific, no logging)

    • FATAL (most specific, little data)

    • ERROR - log only in case of errors

    • WARN - log only in case of warnings or errors

    • INFO (default)

    • DEBUG - log detailed steps (and all the levels above)

    • TRACE (least specific, a lot of data)

    • ALL (least specific, all data)
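
    A small hedged sketch of applying one of these levels at runtime; the LOG_LEVEL environment variable name is just an illustrative assumption:

    import os
    from pyspark.sql import SparkSession

    # Pick a level from the list above, defaulting to WARN if none is given.
    level = os.environ.get("LOG_LEVEL", "WARN")

    spark = SparkSession.builder.appName("log-level-demo").getOrCreate()
    spark.sparkContext.setLogLevel(level)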

  • 13

    The way I did it, with Spark 1.2.1, 1 master and 2 slaves on Amazon EC2:

    # Step 1. Change config file on the master node
    nano /root/ephemeral-hdfs/conf/log4j.properties
    
    # Before
    hadoop.root.logger=INFO,console
    # After
    hadoop.root.logger=WARN,console
    
    # Step 2. Replicate this change to slaves
    ~/spark-ec2/copy-dir /root/ephemeral-hdfs/conf/
    
  • 23

    You can use setLogLevel:

    val spark = SparkSession
          .builder()
          .config("spark.master", "local[1]")
          .appName("TestLog")
          .getOrCreate()
    
    spark.sparkContext.setLogLevel("WARN")
    
