
Can I run a PySpark Jupyter notebook in cluster deploy mode?


Context: the cluster is configured as follows:

  • Everything is running from Docker files.

  • node1: Spark master

  • node2: JupyterHub (this is also where I run my notebooks)

  • node3-7: Spark worker nodes

  • I can telnet and ping from my worker nodes to node2 on the default Spark ports, and vice versa.

Problem: I am trying to get the driver to run on a node other than the one running the Jupyter notebook. Right now I can run jobs on the cluster, but only with the driver running on node2.

After a lot of digging, I found a stackoverflow post claiming that if you run an interactive shell with Spark, you can only run in client deploy mode (the driver sits on the machine you are working from). The post goes on to say that, as a result, something like JupyterHub will not work in cluster deploy mode either, but I cannot find any documentation that confirms this. Can someone confirm whether JupyterHub can run in cluster mode?

I tried to create a Spark session in cluster deploy mode:

from pyspark.sql import SparkSession

spark = SparkSession.builder\
    .enableHiveSupport()\
    .config("spark.local.ip", <node 3 ip>)\
    .config("spark.driver.host", <node 3 ip>)\
    .config('spark.submit.deployMode', 'cluster')\
    .getOrCreate()

Error:

/usr/spark/python/pyspark/sql/session.py in getOrCreate(self)
    167                     for key, value in self._options.items():
    168                         sparkConf.set(key, value)
--> 169                     sc = SparkContext.getOrCreate(sparkConf)
    170                     # This SparkContext may be an existing one.
    171                     for key, value in self._options.items():

/usr/spark/python/pyspark/context.py in getOrCreate(cls, conf)
    308         with SparkContext._lock:
    309             if SparkContext._active_spark_context is None:
--> 310                 SparkContext(conf=conf or SparkConf())
    311             return SparkContext._active_spark_context
    312 

/usr/spark/python/pyspark/context.py in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls)
    113         """
    114         self._callsite = first_spark_call() or CallSite(None, None, None)
--> 115         SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
    116         try:
    117             self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,

/usr/spark/python/pyspark/context.py in _ensure_initialized(cls, instance, gateway, conf)
    257         with SparkContext._lock:
    258             if not SparkContext._gateway:
--> 259                 SparkContext._gateway = gateway or launch_gateway(conf)
    260                 SparkContext._jvm = SparkContext._gateway.jvm
    261 

/usr/spark/python/pyspark/java_gateway.py in launch_gateway(conf)
     93                 callback_socket.close()
     94         if gateway_port is None:
---> 95             raise Exception("Java gateway process exited before sending the driver its port number")
     96 
     97         # In Windows, ensure the Java child processes do not linger after Python has exited.

Exception: Java gateway process exited before sending the driver its port number

2 Answers

  • 1

    You cannot use cluster mode with PySpark at all:

    Currently, standalone mode does not support cluster mode for Python applications.

    Even if you could, cluster mode is not applicable in an interactive environment:

    case (_, CLUSTER) if isShell(args.primaryResource) =>
      error("Cluster deploy mode is not applicable to Spark shells.")
    case (_, CLUSTER) if isSqlShell(args.mainClass) =>
      error("Cluster deploy mode is not applicable to Spark SQL shell.")
    
  • 0

    I am not a PySpark expert, but have you tried changing the kernel.json file of your PySpark Jupyter kernel?

    Maybe you can add the deploy-mode cluster option in there:

    "env": {
      "SPARK_HOME": "/your_dir/spark",
      "PYTHONPATH": "/your_dir/spark/python:/your_dir/spark/python/lib/py4j-0.9-src.zip",
      "PYTHONSTARTUP": "/your_dir/spark/python/pyspark/shell.py",
      "PYSPARK_SUBMIT_ARGS": "--master local[*] pyspark-shell"
     }
    

    You would change this line:

    "PYSPARK_SUBMIT_ARGS": "--master local[*] pyspark-shell"
    

    to point at your cluster master IP and add --deploy-mode cluster.
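
    For illustration only (the master URL below is a placeholder, and per the first answer a standalone master is expected to reject cluster deploy mode for a shell), the edited line might look like:

    "PYSPARK_SUBMIT_ARGS": "--master spark://<master ip>:7077 --deploy-mode cluster pyspark-shell"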

    Not sure what that would change, but maybe it works; I would be curious to know too!

    Good luck

    Edit: I found this, which might help you even though it is from 2015:

    link jupyter pyspark cluster
