From f8fae2e9de88976bc7f9030cc7fca6fa6e05a32b Mon Sep 17 00:00:00 2001
From: 游雁 <zhifu.gzf@alibaba-inc.com>
Date: Thu, 02 Mar 2023 20:50:46 +0800
Subject: [PATCH] torchscripts

---
 funasr/runtime/triton_gpu/README.md | 4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/funasr/runtime/triton_gpu/README.md b/funasr/runtime/triton_gpu/README.md
index ebaa819..daceb4e 100644
--- a/funasr/runtime/triton_gpu/README.md
+++ b/funasr/runtime/triton_gpu/README.md
@@ -1,7 +1,7 @@
 ## Inference with Triton

 ### Steps:
-1. Refer here to [get model.onnx](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/python/onnxruntime#steps)
+1. Refer here to [get model.onnx](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/export/README.md)
 2. Follow below instructions to using triton

 ```sh
@@ -49,4 +49,4 @@
 | 60 (onnx fp32) | 116.0 | 0.0032|

 ## Acknowledge
-This part originates from NVIDIA CISI project. We also have TTS and NLP solutions deployed on triton inference server. If you are interested, please contact us.
\ No newline at end of file
+This part originates from NVIDIA CISI project. We also have TTS and NLP solutions deployed on triton inference server. If you are interested, please contact us.
--
Gitblit v1.9.1