
TensorFlow Serving server closes the connection within the client timeout


We use TensorFlow Serving to load our model and have implemented a Java gRPC client.

Normally it works fine for small payloads. But when we request a larger batch size, with roughly 1–2 MB of data, the server closes the connection and quickly throws an internal error.

We have also opened an issue to track this problem at https://github.com/tensorflow/serving/issues/284.

Job aborted due to stage failure: Task 47 in stage 7.0 failed 4 times, most recent failure: Lost task 47.3 in stage 7.0 (TID 5349, xxx)
io.grpc.StatusRuntimeException: INTERNAL: HTTP/2 error code: INTERNAL_ERROR
Received Rst Stream
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:230)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:211)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:144)
at tensorflow.serving.PredictionServiceGrpc$PredictionServiceBlockingStub.predict(PredictionServiceGrpc.java:160)

......

at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:189)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:91)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:219)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:

1 Answer

  • 1

    As can be seen in the issue above, this happens because the message exceeds the default maximum message size of 4 MiB. The receiver of a larger message must explicitly allow larger messages, or the sender must send smaller messages.

    gRPC itself works fine with larger messages (even hundreds of MB), but applications often do not. The maximum message size exists so that "large" messages are only allowed in applications prepared to accept them.
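To make the limit concrete, here is a minimal, self-contained Java sketch of the size check and a naive batch-splitting calculation. It assumes gRPC's documented 4 MiB default; raising the limit on a real grpc-java client is done via `ManagedChannelBuilder.maxInboundMessageSize` (mentioned in a comment only, so this sketch runs without gRPC on the classpath), and the server must raise its own limit independently.

```java
// Sketch, not the TensorFlow Serving code path: models gRPC's default
// 4 MiB maximum message size locally so the logic can run without a server.
// In a real grpc-java client, the *inbound* limit is raised with e.g.
//   ManagedChannelBuilder.forAddress(host, port)
//       .maxInboundMessageSize(16 * 1024 * 1024)
// The server's receive limit must be raised separately on the server side.
public class GrpcMessageSizeCheck {
    // gRPC's default maximum receive message size: 4 MiB = 4194304 bytes.
    static final int DEFAULT_MAX_MESSAGE_BYTES = 4 * 1024 * 1024;

    // True if a payload of this serialized size would be rejected
    // by a receiver still using the default limit.
    static boolean exceedsDefaultLimit(long serializedBytes) {
        return serializedBytes > DEFAULT_MAX_MESSAGE_BYTES;
    }

    // Smallest number of equally sized requests that keeps each one
    // under the default limit (hypothetical client-side splitting).
    static long batchesNeeded(long serializedBytes) {
        return (serializedBytes + DEFAULT_MAX_MESSAGE_BYTES - 1)
                / DEFAULT_MAX_MESSAGE_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(exceedsDefaultLimit(2_000_000));  // ~2 MB request fits
        System.out.println(exceedsDefaultLimit(6_000_000));  // rejected by default
        System.out.println(batchesNeeded(10_000_000));       // prints 3
    }
}
```

Splitting the batch on the client side avoids touching server configuration at all, which is often the simpler fix when the server binary (here, TensorFlow Serving) is not easy to reconfigure.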
