From 26bfb15e47e82788f12d2589acef6906c9763122 Mon Sep 17 00:00:00 2001
From: zhifu gao <zhifu.gzf@alibaba-inc.com>
Date: Mon, 27 Feb 2023 19:11:59 +0800
Subject: [PATCH] Update README.md

---
 funasr/runtime/triton_gpu/README.md |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/funasr/runtime/triton_gpu/README.md b/funasr/runtime/triton_gpu/README.md
index ebaa819..daceb4e 100644
--- a/funasr/runtime/triton_gpu/README.md
+++ b/funasr/runtime/triton_gpu/README.md
@@ -1,7 +1,7 @@
 ## Inference with Triton 
 
 ### Steps:
-1. Refer here to [get model.onnx](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/python/onnxruntime#steps)
+1. Refer here to [get model.onnx](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/export/README.md)
 
 2. Follow the instructions below to use Triton
 ```sh
@@ -49,4 +49,4 @@
 | 60 (onnx fp32)                | 116.0 | 0.0032|
 
 ## Acknowledge
-This part originates from NVIDIA CISI project. We also have TTS and NLP solutions deployed on triton inference server. If you are interested, please contact us.
\ No newline at end of file
+This part originates from the NVIDIA CISI project. We also have TTS and NLP solutions deployed on Triton Inference Server. If you are interested, please contact us.

--
Gitblit v1.9.1