From 799249154b736cf38a4dee4a95c622d72bdd13f4 Mon Sep 17 00:00:00 2001
From: 雾聪 <wucong.lyb@alibaba-inc.com>
Date: Mon, 21 Aug 2023 13:05:32 +0800
Subject: [PATCH] update docs

---
 funasr/runtime/docs/SDK_tutorial_online.md           |    4 
 funasr/runtime/docs/SDK_advanced_guide_offline.md    |   66 ++--------------------
 funasr/runtime/docs/SDK_advanced_guide_offline_zh.md |   59 ++-----------------
 3 files changed, 15 insertions(+), 114 deletions(-)

diff --git a/funasr/runtime/docs/SDK_advanced_guide_offline.md b/funasr/runtime/docs/SDK_advanced_guide_offline.md
index 2bf9f31..7b06861 100644
--- a/funasr/runtime/docs/SDK_advanced_guide_offline.md
+++ b/funasr/runtime/docs/SDK_advanced_guide_offline.md
@@ -116,59 +116,14 @@
   --keyfile ../../../ssl_key/server.key
  ```
 
+After executing the above command, the offline file transcription service will be started. If the model is specified as a ModelScope model id, the following models will be automatically downloaded from ModelScope:
+[FSMN-VAD](https://www.modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-onnx/summary)
+[Paraformer-large](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx/summary)
+[CT-Transformer](https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx/summary)
 
-## Preparing Model Resources
-
-If you choose to download models from Modelscope through the FunASR-wss-server, you can skip this step. The vad, asr, and punc model resources in the offline file transcription service of FunASR are all from Modelscope. The model addresses are shown in the table below:
-
-| Model | Modelscope url                                                                                                   |
-|-------|------------------------------------------------------------------------------------------------------------------|
-| VAD   | https://www.modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/summary                           |
-| ASR   | https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary |
-| PUNC  | https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/summary               |
-
-The offline file transcription service deploys quantized ONNX models. Below are instructions on how to export ONNX models and their quantization. You can choose to export ONNX models from Modelscope, local files, or finetuned resources: 
-
-### Exporting ONNX models from Modelscope
-
-Download the corresponding model with the given model name from the Modelscope website, and then export the quantized ONNX model
-
-```shell
-python -m funasr.export.export_model \
---export-dir ./export \
---type onnx \
---quantize True \
---model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
---model-name damo/speech_fsmn_vad_zh-cn-16k-common-pytorch \
---model-name damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch
-```
-
-Introduction to command parameters:
-
-```text
---model-name: The name of the model on Modelscope, for example: damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
---export-dir: The export directory of ONNX model.
---type: Model type, currently supports ONNX and torch.
---quantize: Quantize the int8 model.
-```
-
-### Exporting ONNX models from local files
-
-Set the model name to the local path of the model, and export the quantized ONNX model:
-
-```shell
-python -m funasr.export.export_model --model-name /workspace/models/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize True
-```
+If you wish to deploy your fine-tuned model (e.g., 10epoch.pb), rename it to model.pb, replace the original model.pb in the ModelScope model directory, and then pass that directory as `model_dir`.
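+The rename-and-replace step can be sketched as shell commands; the directory name and the dummy checkpoint below are illustrative only, not paths the service prescribes:
+
+```shell
+# Illustrative stand-in for a real finetuned checkpoint
+touch 10epoch.pb
+# The service expects the weights file to be named model.pb inside model_dir
+MODEL_DIR="$(mktemp -d)/paraformer-finetune"
+mkdir -p "$MODEL_DIR"
+cp 10epoch.pb "$MODEL_DIR/model.pb"   # rename on copy
+# then pass "$MODEL_DIR" as model_dir when starting the server
+```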
 
 
-### Exporting models from finetuned resources
-
-If you want to deploy a finetuned model, you can follow these steps:
-Rename the model you want to deploy after finetuning (for example, 10epoch.pb) to model.pb, and replace the original model.pb in Modelscope with this one. If the path of the replaced model is /path/to/finetune/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch, use the following command to convert the finetuned model to an ONNX model: 
-
-```shell
-python -m funasr.export.export_model --model-name /path/to/finetune/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize True
-```
 
 ## Starting the client
 
@@ -210,16 +165,7 @@
 
 ### Custom client
 
-If you want to define your own client, the Websocket communication protocol is as follows:
-
-```text
-# First communication
-{"mode": "offline", "wav_name": wav_name, "is_speaking": True, "hotwords": "hotword1|hotword2"}
-# Send wav data
-Bytes data
-# Send end flag
-{"is_speaking": False}
-```
+If you want to define your own client, see the [Websocket communication protocol](./websocket_protocol.md).
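+For reference, the message framing a custom client must produce (per the protocol formerly inlined here: a JSON handshake, raw wav bytes, then a JSON end flag) can be sketched in Python. The helper names and the 8192-byte chunk size are illustrative, not part of the FunASR API:
+
+```python
+import json
+
+def build_handshake(wav_name, hotwords=""):
+    # First message: declares offline mode and the audio name
+    msg = {"mode": "offline", "wav_name": wav_name, "is_speaking": True}
+    if hotwords:
+        msg["hotwords"] = hotwords  # e.g. "hotword1|hotword2"
+    return json.dumps(msg)
+
+def chunk_audio(data, chunk_size=8192):
+    # Raw wav bytes are sent between the two JSON messages, in fixed-size chunks
+    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
+
+def build_end_flag():
+    # Final message: tells the server the audio stream is finished
+    return json.dumps({"is_speaking": False})
+```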
 
 ## How to customize service deployment
 
diff --git a/funasr/runtime/docs/SDK_advanced_guide_offline_zh.md b/funasr/runtime/docs/SDK_advanced_guide_offline_zh.md
index 513049c..1a70a7e 100644
--- a/funasr/runtime/docs/SDK_advanced_guide_offline_zh.md
+++ b/funasr/runtime/docs/SDK_advanced_guide_offline_zh.md
@@ -191,51 +191,12 @@
 --keyfile  the ssl key file, default: ../../../ssl_key/server.key; to disable ssl, set this parameter to ""
 ```
 
-## 妯″瀷璧勬簮鍑嗗
+鎵ц涓婅堪鎸囦护鍚庯紝鍚姩绂荤嚎鏂囦欢杞啓鏈嶅姟銆傚鏋滄ā鍨嬫寚瀹氫负ModelScope涓璵odel id锛屼細鑷姩浠嶮oldeScope涓笅杞藉涓嬫ā鍨嬶細
+[FSMN-VAD妯″瀷](https://www.modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-onnx/summary)锛�
+[Paraformer-lagre妯″瀷](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx/summary)
+[CT-Transformer鏍囩偣棰勬祴妯″瀷](https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx/summary)
 
-If you choose to download models from ModelScope through funasr-wss-server, you can skip this step.
-
-The vad, asr, and punc model resources in the FunASR offline file transcription service all come from ModelScope; the model addresses are listed in the table below:
-
-| Model | ModelScope link                                                                                               |
-|------|---------------------------------------------------------------------------------------------------------------|
-| VAD  | https://www.modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-onnx/summary                           |
-| ASR  | https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx/summary |
-| PUNC | https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx/summary               |
-
-The offline file transcription service deploys quantized ONNX models. The following describes how to export ONNX models and quantize them; you can export ONNX models from ModelScope or from finetuned resources:
-
-### Exporting ONNX models from ModelScope
-
-Download the model matching the given model name from the ModelScope website, then export the quantized ONNX model:
-
-```shell
-python -m funasr.export.export_model \
---export-dir ./export \
---type onnx \
---quantize True \
---model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
---model-name damo/speech_fsmn_vad_zh-cn-16k-common-pytorch \
---model-name damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch
-```
-
-Introduction to command parameters:
-```text
---model-name  The model name on ModelScope, e.g. damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
---export-dir  The export directory for the ONNX model
---type  Model type; currently supports ONNX and torch
---quantize  Quantize to an int8 model
-```
-### Exporting models from finetuned resources
-
-鍋囧鎮ㄦ兂閮ㄧ讲finetune鍚庣殑妯″瀷锛屽彲浠ュ弬鑰冨涓嬫楠わ細
-
-灏嗘偍finetune鍚庨渶瑕侀儴缃茬殑妯″瀷锛堜緥濡�10epoch.pb锛夛紝閲嶅懡鍚嶄负model.pb锛屽苟灏嗗師modelscope涓ā鍨媘odel.pb鏇挎崲鎺夛紝鍋囧鏇挎崲鍚庣殑妯″瀷璺緞涓�/path/to/finetune/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch锛岄�氳繃涓嬭堪鍛戒护鎶奻inetune鍚庣殑妯″瀷杞垚onnx妯″瀷锛�
-
-```shell
-python -m funasr.export.export_model --model-name /path/to/finetune/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize True
-```
-
+濡傛灉锛屾偍甯屾湜閮ㄧ讲鎮╢inetune鍚庣殑妯″瀷锛堜緥濡�10epoch.pb锛夛紝闇�瑕佹墜鍔ㄥ皢妯″瀷閲嶅懡鍚嶄负model.pb锛屽苟灏嗗師modelscope涓ā鍨媘odel.pb鏇挎崲鎺夛紝灏嗚矾寰勬寚瀹氫负`model_dir`鍗冲彲銆�
 
 
 ## How to customize service deployment
@@ -251,15 +212,9 @@
 
 ### 鑷畾涔夊鎴风锛�
 
-If you want to define your own client, the websocket communication protocol is:
+If you want to define your own client, see the [websocket communication protocol](./websocket_protocol_zh.md).
 
-```text
-# First communication
-{"mode": "offline", "wav_name": wav_name, "is_speaking": True}
-# Send wav data
-bytes data
-# Send end flag
-{"is_speaking": False}
-```
 
 ### C++ server:
diff --git a/funasr/runtime/docs/SDK_tutorial_online.md b/funasr/runtime/docs/SDK_tutorial_online.md
index bc02176..80cc2b9 100644
--- a/funasr/runtime/docs/SDK_tutorial_online.md
+++ b/funasr/runtime/docs/SDK_tutorial_online.md
@@ -24,9 +24,9 @@
 Download the deployment tool `funasr-runtime-deploy-online-cpu-zh.sh`
 
 ```shell
-curl -O https://raw.githubusercontent.com/alibaba-damo-academy/FunASR/main/funasr/runtime/deploy_tools/funasr-runtime-deploy-online-cpu-en.sh;
+curl -O https://raw.githubusercontent.com/alibaba-damo-academy/FunASR/main/funasr/runtime/deploy_tools/funasr-runtime-deploy-online-cpu-zh.sh;
 # If there is a network problem, users in mainland China can use the following command:
-# curl -O https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/shell/funasr-runtime-deploy-online-cpu-en.sh;
+# curl -O https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/shell/funasr-runtime-deploy-online-cpu-zh.sh;
 ```
 
 Execute the deployment tool and press the Enter key at the prompt to complete the installation and deployment of the server. Currently, the convenient deployment tool only supports Linux environments. For other environments, please refer to the development guide ([docs](./SDK_advanced_guide_online.md)).

--
Gitblit v1.9.1