I am trying to convert the deeplab-v3 model to TF-Lite. I downloaded the MobileNet-v2 pre-trained model from mobilenetv2_coco_voc_trainaug.

I converted the model with the following command:

bazel run --config=opt \
  //tensorflow/contrib/lite/toco:toco -- \
  --input_file=/tmp/frozen_inference_graph.pb \
  --output_file=/tmp/optimized_graph.tflite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_type=QUANTIZED_UINT8 \
  --input_arrays=ImageTensor \
  --output_arrays=SemanticPredictions \
  --input_shapes=1,513,513,3 \
  --allow_custom_ops
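
To make sure the converter at least produced a FlatBuffer, I can run a quick check on the desktop before bundling the file. This is only a sanity-check sketch; the path matches the --output_file above, and it assumes the TFLite FlatBuffer file identifier "TFL3" sits at bytes 4-7:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CheckTfliteHeader {
  public static void main(String[] args) throws IOException {
    // Read the file produced by toco.
    byte[] bytes = Files.readAllBytes(Paths.get("/tmp/optimized_graph.tflite"));
    // A valid TFLite FlatBuffer should carry the identifier "TFL3" at bytes 4-7.
    String ident = new String(bytes, 4, 4, StandardCharsets.US_ASCII);
    System.out.println("size=" + bytes.length + " identifier=" + ident);
  }
}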

It converted to a tflite model successfully. I then put the file in the Android assets folder and loaded it in my Android app, with the following set in gradle:

aaptOptions {
    noCompress "tflite"
    noCompress "lite"
}
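
In case that aaptOptions block is not picked up, the built APK can also be inspected directly; as far as I know the asset must be STORED (uncompressed) for the memory mapping below to work. A rough check (the APK path and asset name are placeholders for my project):

import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class CheckApkAsset {
  public static void main(String[] args) throws IOException {
    // Placeholder APK path and asset name; adjust to the actual build output.
    try (ZipFile apk = new ZipFile("app/build/outputs/apk/debug/app-debug.apk")) {
      ZipEntry entry = apk.getEntry("assets/optimized_graph.tflite");
      // ZipEntry.STORED means the asset was packaged without compression.
      String method = entry.getMethod() == ZipEntry.STORED ? "STORED" : "DEFLATED";
      System.out.println("method=" + method + " size=" + entry.getSize());
    }
  }
}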

The model is loaded with the following function:

/** Memory-map the model file in Assets. */
  private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
    AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(MODEL_PATH);
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
  }
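
Right before constructing the Interpreter, I can also log what was actually mapped and compare it with the .tflite file on disk; this is just a diagnostic sketch and assumes android.util.Log:

// Diagnostic only: dump the size and first bytes of the mapped buffer,
// to compare against the .tflite file produced by toco (e.g. via xxd).
MappedByteBuffer buffer = loadModelFile(activity);
byte[] head = new byte[8];
buffer.get(head);
buffer.rewind(); // reset the position before the buffer is used elsewhere
StringBuilder hex = new StringBuilder();
for (byte b : head) {
  hex.append(String.format("%02x ", b));
}
android.util.Log.d("TFLITE", "capacity=" + buffer.capacity() + " head=" + hex);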

The problem occurs when calling tflite = new Interpreter(loadModelFile(activity)); — the exception says "MappedByteBuffer is not a valid flatbuffer model". Can anyone help me figure out which step I got wrong? Is there a bug in the toco tool?