
Structured Streaming error py4j.protocol.Py4JNetworkError: Answer from Java side is empty


I am trying to build a left outer join between two Kafka streams using PySpark and Structured Streaming (Spark 2.3).

import os
import time

from pyspark.sql.types import *
from pyspark.sql.functions import from_json, col, struct, explode, get_json_object
from ast import literal_eval
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 pyspark-shell'

spark = SparkSession \
    .builder \
    .appName("Spark Kafka Structured Streaming") \
    .getOrCreate()

schema_impressions = StructType() \
    .add("id_req", StringType()) \
    .add("ts_imp_request", TimestampType()) \
    .add("country", StringType()) \
    .add("TS_IMPRESSION", TimestampType()) 

schema_requests = StructType() \
    .add("id_req", StringType()) \
    .add("page", StringType()) \
    .add("conntype", StringType()) \
    .add("TS_REQUEST", TimestampType()) 

impressions = spark.readStream \
  .format("kafka") \
  .option("kafka.bootstrap.servers", "ip-ec2.internal:9092") \
  .option("subscribe", "ssp.datascience_impressions") \
  .load()

requests = spark \
  .readStream \
  .format("kafka") \
  .option("kafka.bootstrap.servers", "ip-ec2.internal:9092") \
  .option("subscribe", "ssp.datascience_requests") \
  .option("startingOffsets", "latest") \
  .load()

query_requests = requests \
        .select(col("timestamp"), col("key").cast("string"), from_json(col("value").cast("string"), schema_requests).alias("parsed")) \
        .select(col("timestamp").alias("timestamp_req"), "parsed.id_req", "parsed.page", "parsed.conntype", "parsed.TS_REQUEST") \
        .withWatermark("timestamp_req", "120 seconds") 

query_impressions = impressions \
        .select(col("timestamp"), col("key").cast("string"), from_json(col("value").cast("string"), schema_impressions).alias("parsed")) \
        .select(col("timestamp").alias("timestamp_imp"), col("parsed.id_req").alias("id_imp"), "parsed.ts_imp_request", "parsed.country", "parsed.TS_IMPRESSION") \
        .withWatermark("timestamp_imp", "120 seconds") 

query_requests.printSchema()        
query_impressions.printSchema()

root
 |-- timestamp_req: timestamp (nullable = true)
 |-- id_req: string (nullable = true)
 |-- page: string (nullable = true)
 |-- conntype: string (nullable = true)
 |-- TS_REQUEST: timestamp (nullable = true)

root
 |-- timestamp_imp: timestamp (nullable = true)
 |-- id_imp: string (nullable = true)
 |-- ts_imp_request: timestamp (nullable = true)
 |-- country: string (nullable = true)
 |-- TS_IMPRESSION: timestamp (nullable = true)

To summarize: I read data from the two Kafka streams, and in the following lines I try to join them by their ID.

rawQuery = query_requests.join(query_impressions,  expr(""" 
    (id_req = id_imp AND 
    timestamp_imp >= timestamp_req AND 
    timestamp_imp <= timestamp_req + interval 5 minutes) 
    """), 
  "leftOuter")

rawQuery = rawQuery \
        .writeStream \
        .format("parquet") \
        .option("checkpointLocation", "/home/jovyan/streaming/applicationHistory") \
        .option("path", "/home/jovyan/streaming").start()
print(rawQuery.status)

{'message': 'Processing new data', 'isDataAvailable': True, 'isTriggerActive': True}

ERROR:root:Exception while sending command.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/py4j/java_gateway.py", line 1062, in send_command
    raise Py4JNetworkError("Answer from Java side is empty")
py4j.protocol.Py4JNetworkError: Answer from Java side is empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/py4j/java_gateway.py", line 908, in send_command
    response = connection.send_command(command)
  File "/opt/conda/lib/python3.6/site-packages/py4j/java_gateway.py", line 1067, in send_command
    "Error while receiving", e, proto.ERROR_ON_RECEIVE)
py4j.protocol.Py4JNetworkError: Error while receiving

ERROR:py4j.java_gateway:An error occurred while trying to connect to the Java server (127.0.0.1:33968)
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2910, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input>", line 3, in <module>
    print(rawQuery.status)
  File "/opt/conda/lib/python3.6/site-packages/pyspark/sql/streaming.py", line 114, in status
    return json.loads(self._jsq.status().json())
  File "/opt/conda/lib/python3.6/site-packages/py4j/java_gateway.py", line 1160, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/opt/conda/lib/python3.6/site-packages/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/opt/conda/lib/python3.6/site-packages/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name))
py4j.protocol.Py4JError: An error occurred while calling o92.status

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 1828, in showtraceback
    stb = value._render_traceback_()
AttributeError: 'Py4JError' object has no attribute '_render_traceback_'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/py4j/java_gateway.py", line 852, in _get_connection
    connection = self.deque.pop()
IndexError: pop from an empty deque

I am running Spark locally with a Jupyter Notebook. In spark/conf/spark-defaults.conf I have:

# Example:
# spark.master                     spark://master:7077
# spark.eventLog.enabled           true
# spark.eventLog.dir               hdfs://namenode:8021/directory
# spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.driver.memory             15g
# spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"

If I try to use Spark again after the previous error, I get this error:

ERROR:root:Exception while sending command.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/py4j/java_gateway.py", line 1062, in send_command
    raise Py4JNetworkError("Answer from Java side is empty")
py4j.protocol.Py4JNetworkError: Answer from Java side is empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/py4j/java_gateway.py", line 908, in send_command
    response = connection.send_command(command)
  File "/opt/conda/lib/python3.6/site-packages/py4j/java_gateway.py", line 1067, in send_command
    "Error while receiving", e, proto.ERROR_ON_RECEIVE)
py4j.protocol.Py4JNetworkError: Error while receiving

1 Answer


    I solved the problem! Basically, for some reason the issue was related to the Jupyter Notebook. I removed the following line from the code above:

    os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 pyspark-shell'
    

    Then I ran the code from the console instead:

    > spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0 spark_structured.py
    

    This way I was able to run all the code without any problems.
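
    One caveat when moving from the notebook to spark-submit: a standalone script exits as soon as the main thread finishes, which stops the streaming query. The script should block on the query; here is a minimal sketch of how the end of spark_structured.py could look (rawQuery is the query from the question, and the error handling is only an illustration):

    # Minimal sketch: keep the driver alive until the streaming query
    # stops or fails (otherwise spark-submit would exit immediately).
    try:
        rawQuery.awaitTermination()
    except Exception:
        # exception() returns the StreamingQueryException that stopped
        # the query, or None if it terminated cleanly
        print(rawQuery.exception())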

    If you run into the same problem, you can also edit spark-defaults.conf and increase spark.driver.memory and spark.executor.memory.
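
    For reference, this is roughly what those two entries could look like in spark-defaults.conf (the values are only illustrative; tune them to the RAM actually available):

    # spark-defaults.conf -- illustrative values, not a recommendation
    spark.driver.memory       15g
    spark.executor.memory     15g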
