From 94cb66dbb9ae12e044a41fb8a3d84e1835ee7e7b Mon Sep 17 00:00:00 2001
From: zhifu gao <zhifu.gzf@alibaba-inc.com>
Date: Thu, 02 Mar 2023 20:20:10 +0800
Subject: [PATCH] Merge pull request #177 from alibaba-damo-academy/dev_timestamp

---
 funasr/runtime/triton_gpu/README.md | 4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/funasr/runtime/triton_gpu/README.md b/funasr/runtime/triton_gpu/README.md
index ebaa819..daceb4e 100644
--- a/funasr/runtime/triton_gpu/README.md
+++ b/funasr/runtime/triton_gpu/README.md
@@ -1,7 +1,7 @@
 ## Inference with Triton

 ### Steps:
-1. Refer here to [get model.onnx](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/python/onnxruntime#steps)
+1. Refer here to [get model.onnx](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/export/README.md)
 2. Follow below instructions to using triton

 ```sh
@@ -49,4 +49,4 @@
 | 60 (onnx fp32) | 116.0 | 0.0032|

 ## Acknowledge
-This part originates from NVIDIA CISI project. We also have TTS and NLP solutions deployed on triton inference server. If you are interested, please contact us.
\ No newline at end of file
+This part originates from NVIDIA CISI project. We also have TTS and NLP solutions deployed on triton inference server. If you are interested, please contact us.
-- 
Gitblit v1.9.1