From 6836bfb77f95fa20500f41948cc6685651e9fc76 Mon Sep 17 00:00:00 2001
From: 游雁 <zhifu.gzf@alibaba-inc.com>
Date: Mon, 07 Aug 2023 09:23:17 +0800
Subject: [PATCH] funasr streaming sdk
---
funasr/runtime/docs/SDK_tutorial_online.md | 196 ++++++++++++++
funasr/runtime/docs/SDK_advanced_guide_online.md | 259 ++++++++++++++++++
funasr/runtime/docs/websocket_protocol_zh.md | 2
funasr/runtime/docs/SDK_tutorial_online_zh.md | 40 --
funasr/runtime/readme.md | 20 +
funasr/runtime/docs/SDK_advanced_guide_online_zh.md | 289 ++++++++++++++++++++
README.md | 2
7 files changed, 772 insertions(+), 36 deletions(-)
diff --git a/README.md b/README.md
index 34fae61..24e62e7 100644
--- a/README.md
+++ b/README.md
@@ -28,7 +28,7 @@
<a name="whats-new"></a>
## What's new:
-- 2023/08/07: The real-time transcription service (CPU) of Mandarin has been released. For more details, please refer to ([Deployment documentation](funasr/runtime/docs/SDK_tutorial_online_zh.md)).
+- 2023/08/07: The real-time transcription service (CPU) of Mandarin has been released. For more details, please refer to ([Deployment documentation](funasr/runtime/docs/SDK_tutorial_online.md)).
- 2023/07/17: BAT is released, which is a low-latency and low-memory-consumption RNN-T model. For more details, please refer to ([BAT](egs/aishell/bat)).
- 2023/07/03: The offline file transcription service (CPU) of Mandarin has been released. For more details, please refer to ([Deployment documentation](funasr/runtime/docs/SDK_tutorial.md)).
- 2023/06/26: ASRU2023 Multi-Channel Multi-Party Meeting Transcription Challenge 2.0 completed the competition and announced the results. For more details, please refer to ([M2MeT2.0](https://alibaba-damo-academy.github.io/FunASR/m2met2/index.html)).
diff --git a/funasr/runtime/docs/SDK_advanced_guide_online.md b/funasr/runtime/docs/SDK_advanced_guide_online.md
new file mode 100644
index 0000000..4bbc69c
--- /dev/null
+++ b/funasr/runtime/docs/SDK_advanced_guide_online.md
@@ -0,0 +1,259 @@
+# Advanced Development Guide (Real-time transcription service)
+
+FunASR provides a Chinese real-time speech transcription service that can be deployed locally or on a cloud server with just one click. The core of the service is the FunASR runtime SDK, which has been open-sourced. FunASR-runtime combines various capabilities such as speech endpoint detection (VAD), large-scale speech recognition (ASR) using Paraformer-large, streaming speech recognition, and punctuation restoration (PUNC), which have all been open-sourced by the speech laboratory of DAMO Academy on the Modelscope community. This enables accurate and efficient high-concurrency transcription of audio.
+
+This document serves as a development guide for the FunASR real-time transcription service. If you wish to quickly experience the service, please refer to the one-click deployment example ([docs](./SDK_tutorial_online.md)).
+
+## Installation of Docker
+
+The following steps are for manually installing Docker and Docker images. If your Docker image has already been launched, you can ignore this step.
+
+### Installation of Docker environment
+
+```shell
+# Ubuntu:
+curl -fsSL https://test.docker.com -o test-docker.sh
+sudo sh test-docker.sh
+# Debian:
+curl -fsSL https://get.docker.com -o get-docker.sh
+sudo sh get-docker.sh
+# CentOS:
+curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
+# MacOS:
+brew install --cask --appdir=/Applications docker
+```
+
+More details can be found in the [docs](https://alibaba-damo-academy.github.io/FunASR/en/installation/docker.html).
+
+### Starting Docker
+
+```shell
+sudo systemctl start docker
+```
+
+### Pulling and launching images
+
+Use the following command to pull and launch the Docker image for the FunASR runtime-SDK:
+
+```shell
+sudo docker pull registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-latest
+
+sudo docker run -p 10095:10095 -it --privileged=true -v /root:/workspace/models registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-latest
+```
+
+Introduction to command parameters:
+```text
+-p <host port>:<mapped docker port>: In the example, host machine (ECS) port 10095 is mapped to port 10095 in the Docker container. Make sure that port 10095 is open in the ECS security rules.
+
+-v <host path>:<mounted Docker path>: In the example, the host machine path /root is mounted to the Docker path /workspace/models.
+
+```
+
+
+## Starting the server
+
+Use the following script to start the server:
+```shell
+./run_server.sh --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
+ --model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
+ --punc-dir damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx
+```
+
+More details about the script run_server.sh:
+
+The FunASR-wss-server supports downloading models from Modelscope. You can set the model download address (--download-model-dir, default is /workspace/models) and the model ID (--model-dir, --vad-dir, --punc-dir). Here is an example:
+
+```shell
+cd /workspace/FunASR/funasr/runtime/websocket/build/bin
+./funasr-wss-server \
+ --download-model-dir /workspace/models \
+ --model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
+ --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
+ --punc-dir damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx \
+ --decoder-thread-num 32 \
+ --io-thread-num 8 \
+ --port 10095 \
+ --certfile ../../../ssl_key/server.crt \
+ --keyfile ../../../ssl_key/server.key
+ ```
+
+Introduction to command parameters:
+
+```text
+--download-model-dir: Model download address, download models from Modelscope by setting the model ID.
+--model-dir: Modelscope model ID.
+--quantize: True for quantized ASR model, False for non-quantized ASR model. Default is True.
+--vad-dir: Modelscope model ID.
+--vad-quant: True for quantized VAD model, False for non-quantized VAD model. Default is True.
+--punc-dir: Modelscope model ID.
+--punc-quant: True for quantized PUNC model, False for non-quantized PUNC model. Default is True.
+--port: Port number that the server listens on. Default is 10095.
+--decoder-thread-num: Number of inference threads that the server starts. Default is 8.
+--io-thread-num: Number of IO threads that the server starts. Default is 1.
+--certfile <string>: SSL certificate file. Default is ../../../ssl_key/server.crt.
+--keyfile <string>: SSL key file. Default is ../../../ssl_key/server.key.
+```
+
+The FunASR-wss-server also supports loading models from a local path (see Preparing Model Resources for detailed instructions on preparing local model resources). Here is an example:
+
+```shell
+cd /workspace/FunASR/funasr/runtime/websocket/build/bin
+./funasr-wss-server \
+ --model-dir /workspace/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
+ --vad-dir /workspace/models/damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
+ --punc-dir /workspace/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx \
+ --decoder-thread-num 32 \
+ --io-thread-num 8 \
+ --port 10095 \
+ --certfile ../../../ssl_key/server.crt \
+ --keyfile ../../../ssl_key/server.key
+ ```
+
+
+## Preparing Model Resources
+
+If you choose to download models from Modelscope through the FunASR-wss-server, you can skip this step. The VAD, ASR, and PUNC model resources in the FunASR transcription service are all from Modelscope. The model addresses are shown in the table below:
+
+| Model | Modelscope url |
+|-------|------------------------------------------------------------------------------------------------------------------|
+| VAD   | https://www.modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/summary                           |
+| ASR   | https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary |
+| PUNC  | https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/summary               |
+
+The transcription service deploys quantized ONNX models. Below are instructions on how to export ONNX models and quantize them. You can choose to export ONNX models from Modelscope, local files, or finetuned resources:
+
+### Exporting ONNX models from Modelscope
+
+Download the corresponding model with the given model name from the Modelscope website, and then export the quantized ONNX model:
+
+```shell
+python -m funasr.export.export_model \
+--export-dir ./export \
+--type onnx \
+--quantize True \
+--model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
+--model-name damo/speech_fsmn_vad_zh-cn-16k-common-pytorch \
+--model-name damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch
+```
+
+Introduction to command parameters:
+
+```text
+--model-name: The name of the model on Modelscope, for example: damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
+--export-dir: The export directory of ONNX model.
+--type: Model type, currently supports ONNX and torch.
+--quantize: Quantize the int8 model.
+```
+
+### Exporting ONNX models from local files
+
+Set the model name to the local path of the model, and export the quantized ONNX model:
+
+```shell
+python -m funasr.export.export_model --model-name /workspace/models/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize True
+```
+
+
+### Exporting models from finetuned resources
+
+If you want to deploy a finetuned model, you can follow these steps:
+Rename the model you want to deploy after finetuning (for example, 10epoch.pb) to model.pb, and replace the original model.pb in Modelscope with this one. If the path of the replaced model is /path/to/finetune/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch, use the following command to convert the finetuned model to an ONNX model:
+
+```shell
+python -m funasr.export.export_model --model-name /path/to/finetune/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize True
+```
+
+## Starting the client
+
+After completing the deployment of the FunASR transcription service on the server, you can test and use the service by following these steps. Several client implementations are currently supported; the following are command-line examples based on the Python client, the C++ client, and a custom client using the WebSocket communication protocol:
+
+### python-client
+```shell
+python funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode offline --audio_in "./data/wav.scp" --send_without_sleep --output_dir "./results"
+```
+
+Introduction to command parameters:
+
+```text
+--host: the IP address of the server. It can be set to 127.0.0.1 for local testing.
+--port: the port number of the server listener.
+--audio_in: the audio input. Input can be a path to a wav file or a wav.scp file (a Kaldi-formatted wav list in which each line includes a wav_id followed by a tab and a wav_path).
+--output_dir: the path to the recognition result output.
+--ssl: whether to use SSL encryption. The default is to use SSL.
+--mode: offline mode.
+```
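The wav.scp list mentioned above is a plain two-column file: each line is a wav_id, a tab, and a wav_path. A minimal sketch that creates one (the utterance IDs and paths below are illustrative, not files shipped with FunASR):

```shell
# Build an illustrative wav.scp: "<wav_id><TAB><wav_path>" per line.
mkdir -p data
printf 'utt_01\t/data/audio/asr_example_1.wav\n'  > data/wav.scp
printf 'utt_02\t/data/audio/asr_example_2.wav\n' >> data/wav.scp
cat data/wav.scp
```

The resulting file can then be passed directly to --audio_in.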
+
+### c++-client
+```shell
+./funasr-wss-client --server-ip 127.0.0.1 --port 10095 --wav-path test.wav --thread-num 1 --is-ssl 1
+```
+
+Introduction to command parameters:
+
+```text
+--server-ip: the IP address of the server. It can be set to 127.0.0.1 for local testing.
+--port: the port number that the server listens on.
+--wav-path: the audio input; a path to a wav file.
+--thread-num: the number of concurrent client threads. Default is 1.
+--is-ssl: whether to use SSL encryption. Default is 1 (enabled); set to 0 to disable.
+```
+
+### Custom client
+
+If you want to define your own client, the Websocket communication protocol is as follows:
+
+```text
+# First communication
+{"mode": "offline", "wav_name": wav_name, "is_speaking": True}
+# Send wav data
+Bytes data
+# Send end flag
+{"is_speaking": False}
+```
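As a minimal illustration of this protocol, the sketch below builds the two JSON control messages in Python; the `send_wav` helper assumes an already-connected WebSocket object (for example from the third-party `websockets` package) and an illustrative 8000-byte chunk size. It is a sketch of the message sequence, not the project's official client.

```python
# Sketch of the client side of the protocol above: build the JSON
# control messages, then stream raw wav bytes between them.
import json


def start_message(wav_name: str) -> str:
    # First communication: announce the mode and the wav name.
    return json.dumps({"mode": "offline", "wav_name": wav_name,
                       "is_speaking": True})


def end_message() -> str:
    # End flag: no more audio will follow.
    return json.dumps({"is_speaking": False})


async def send_wav(ws, wav_path: str, wav_name: str = "example"):
    # ws is assumed to be an already-connected WebSocket object,
    # e.g. from the third-party "websockets" package.
    await ws.send(start_message(wav_name))
    with open(wav_path, "rb") as f:
        while chunk := f.read(8000):   # illustrative chunk size
            await ws.send(chunk)
    await ws.send(end_message())
    return await ws.recv()             # recognition result


print(start_message("example"))
```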
+
+## How to customize service deployment
+
+The code for FunASR-runtime is open source. If the server and client cannot fully meet your needs, you can further develop them based on your own requirements:
+
+### C++ client
+
+https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/websocket
+
+### Python client
+
+https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/python/websocket
+
+### C++ server
+
+#### VAD
+```c++
+// Using the VAD model involves two steps: FsmnVadInit and FsmnVadInfer:
+FUNASR_HANDLE vad_handle=FsmnVadInit(model_path, thread_num);
+// Where: model_path contains "model-dir" and "quantize", and thread_num is the ONNX thread count;
+FUNASR_RESULT result=FsmnVadInfer(vad_handle, wav_file.c_str(), NULL, 16000);
+// Where: vad_handle is the return value of FsmnVadInit, wav_file is the path to the audio file, and the last argument is the sampling rate (default 16k).
+```
+
+See the usage example for details [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-offline-vad.cpp)
+
+#### ASR
+```c++
+// Using the ASR model involves two steps: FunOfflineInit and FunOfflineInfer:
+FUNASR_HANDLE asr_handle=FunOfflineInit(model_path, thread_num);
+// Where: model_path contains "model-dir" and "quantize", and thread_num is the ONNX thread count;
+FUNASR_RESULT result=FunOfflineInfer(asr_handle, wav_file.c_str(), RASR_NONE, NULL, 16000);
+// Where: asr_handle is the return value of FunOfflineInit, wav_file is the path to the audio file, and the last argument is the sampling rate (default 16k).
+```
+
+See the usage example for details, [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-offline.cpp)
+
+#### PUNC
+```c++
+// Using the PUNC model involves two steps: CTTransformerInit and CTTransformerInfer:
+FUNASR_HANDLE punc_handle=CTTransformerInit(model_path, thread_num);
+// Where: model_path contains "model-dir" and "quantize", and thread_num is the ONNX thread count;
+FUNASR_RESULT result=CTTransformerInfer(punc_handle, txt_str.c_str(), RASR_NONE, NULL);
+// Where: punc_handle is the return value of CTTransformerInit, and txt_str is the input text.
+```
+See the usage example for details, [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-offline-punc.cpp)
diff --git a/funasr/runtime/docs/SDK_advanced_guide_online_zh.md b/funasr/runtime/docs/SDK_advanced_guide_online_zh.md
new file mode 100644
index 0000000..cc47510
--- /dev/null
+++ b/funasr/runtime/docs/SDK_advanced_guide_online_zh.md
@@ -0,0 +1,289 @@
+# FunASR Real-time Transcription Service Development Guide
+
+FunASR provides a real-time speech transcription service that can be conveniently deployed on local or cloud servers, with the open-source FunASR runtime-SDK at its core.
+It integrates voice activity detection (VAD), Paraformer-large non-streaming speech recognition (ASR), Paraformer-large streaming speech recognition (ASR), and punctuation restoration (PUNC) capabilities, open-sourced by the DAMO Academy speech laboratory on the Modelscope community. The software package transcribes speech to text in real time, corrects the output with high-accuracy transcription at the end of each sentence, outputs punctuated text, and supports high-concurrency multi-channel requests.
+
+This document is the development guide for the FunASR real-time transcription service. If you want to quickly try the real-time transcription service, see [Quick Start](#quick-start).
+
+## Quick Start
+### Launching the image
+
+Pull and launch the FunASR runtime-SDK Docker image with the following commands:
+
+```shell
+sudo docker pull registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.0
+
+sudo docker run -p 10095:10095 -it --privileged=true -v /root:/workspace/models registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.0
+```
+If you have not installed Docker, see [Docker installation](#docker-installation).
+
+### Starting the server
+
+After the Docker container starts, launch the funasr-wss-server service:
+```shell
+cd FunASR/funasr/runtime
+./run_server.sh \
+  --download-model-dir /workspace/models \
+  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
+  --model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
+  --punc-dir damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx
+```
+For detailed server parameters, see [Server parameters](#server-parameters).
+### Client testing and usage
+
+Download the client testing tool directory `samples`:
+```shell
+wget https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/sample/funasr_samples.tar.gz
+```
+Taking the Python client as an example: it supports multiple audio input formats (.wav, .pcm, .mp3, etc.), video input (.mp4, etc.), and multi-file wav.scp lists. For other client versions, see [Detailed client usage](#detailed-client-usage); for customized service deployment, see [How to customize service deployment](#how-to-customize-service-deployment).
+```shell
+python3 wss_client_asr.py --host "127.0.0.1" --port 10095 --mode 2pass
+```
+
+------------------
+## Docker installation
+
+The following steps manually install the Docker environment:
+
+### Installing the Docker environment
+```shell
+# Ubuntu:
+curl -fsSL https://test.docker.com -o test-docker.sh
+sudo sh test-docker.sh
+# Debian:
+curl -fsSL https://get.docker.com -o get-docker.sh
+sudo sh get-docker.sh
+# CentOS:
+curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
+# MacOS:
+brew install --cask --appdir=/Applications docker
+```
+
+For installation details, see: https://alibaba-damo-academy.github.io/FunASR/en/installation/docker.html
+
+### Starting Docker
+
+```shell
+sudo systemctl start docker
+```
+
+
+## Detailed client usage
+
+After deploying the FunASR service on the server, you can test and use the real-time transcription service through the following steps.
+Clients in the following programming languages are currently supported:
+
+- [Python](#python-client)
+- [CPP](#cpp-client)
+- [HTML web version](#html-client)
+- [Java](#java-client)
+
+### python-client
+To run the client directly for testing, refer to the following simple instructions, using the Python version as an example:
+
+```shell
+python3 wss_client_asr.py --host "127.0.0.1" --port 10095 --mode offline --audio_in "../audio/asr_example.wav" --output_dir "./results"
+```
+
+Command parameter description:
+```text
+--host: IP address of the machine where the FunASR runtime-SDK service is deployed; defaults to the local IP (127.0.0.1). If the client and the service are not on the same server, change it to the deployment machine's IP.
+--port: deployment port number, 10095.
+--mode: offline indicates offline file transcription.
+--audio_in: audio to transcribe; supports a file path or a wav.scp file list.
+--output_dir: path where recognition results are saved.
+```
+
+### cpp-client
+After entering the samples/cpp directory, you can test with the CPP client:
+```shell
+./funasr-wss-client --server-ip 127.0.0.1 --port 10095 --wav-path ../audio/asr_example.wav
+```
+
+Command parameter description:
+
+```text
+--server-ip: IP address of the machine where the FunASR runtime-SDK service is deployed; defaults to the local IP (127.0.0.1). If the client and the service are not on the same server, change it to the deployment machine's IP.
+--port: deployment port number, 10095.
+--wav-path: audio file to transcribe; supports a file path.
+```
+
+### html-client
+
+Open html/static/index.html in a browser to see the page below; it supports microphone input and file upload for direct testing.
+
+<img src="images/html.png" width="900"/>
+
+### java-client
+
+```shell
+FunasrWsClient --host localhost --port 10095 --audio_in ./asr_example.wav --mode offline
+```
+For details, refer to the [documentation](../java/readme.md).
+
+
+
+## Server parameters
+
+funasr-wss-server supports downloading models from Modelscope. Set the model download directory (--download-model-dir, default /workspace/models) and the model IDs (--model-dir, --vad-dir, --punc-dir). Example:
+```shell
+cd /workspace/FunASR/funasr/runtime/websocket/build/bin
+./funasr-wss-server \
+  --download-model-dir /workspace/models \
+  --model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
+  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
+  --punc-dir damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx \
+  --decoder-thread-num 32 \
+  --io-thread-num 8 \
+  --port 10095 \
+  --certfile ../../../ssl_key/server.crt \
+  --keyfile ../../../ssl_key/server.key
+```
+Command parameters:
+```text
+--download-model-dir: model download directory; models are downloaded from Modelscope by model ID.
+--model-dir: Modelscope model ID.
+--quantize: True for the quantized ASR model, False for the non-quantized model; default is True.
+--vad-dir: Modelscope model ID.
+--vad-quant: True for the quantized VAD model, False for the non-quantized model; default is True.
+--punc-dir: Modelscope model ID.
+--punc-quant: True for the quantized PUNC model, False for the non-quantized model; default is True.
+--port: port the server listens on; default is 10095.
+--decoder-thread-num: number of inference threads started by the server; default is 8.
+--io-thread-num: number of IO threads started by the server; default is 1.
+--certfile: SSL certificate file; default is ../../../ssl_key/server.crt.
+--keyfile: SSL key file; default is ../../../ssl_key/server.key.
+```
+
+funasr-wss-server also supports loading models from a local path (see [Preparing model resources](#preparing-model-resources) for how to prepare local model resources). Example:
+```shell
+cd /workspace/FunASR/funasr/runtime/websocket/build/bin
+./funasr-wss-server \
+  --model-dir /workspace/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
+  --vad-dir /workspace/models/damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
+  --punc-dir /workspace/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx \
+  --decoder-thread-num 32 \
+  --io-thread-num 8 \
+  --port 10095 \
+  --certfile ../../../ssl_key/server.crt \
+  --keyfile ../../../ssl_key/server.key
+```
+Command parameters:
+```text
+--model-dir: ASR model path; default is /workspace/models/asr.
+--quantize: True for the quantized ASR model, False for the non-quantized model; default is True.
+--vad-dir: VAD model path; default is /workspace/models/vad.
+--vad-quant: True for the quantized VAD model, False for the non-quantized model; default is True.
+--punc-dir: PUNC model path; default is /workspace/models/punc.
+--punc-quant: True for the quantized PUNC model, False for the non-quantized model; default is True.
+--port: port the server listens on; default is 10095.
+--decoder-thread-num: number of inference threads started by the server; default is 8.
+--io-thread-num: number of IO threads started by the server; default is 1.
+--certfile: SSL certificate file; default is ../../../ssl_key/server.crt.
+--keyfile: SSL key file; default is ../../../ssl_key/server.key.
+```
+
+## Preparing model resources
+
+If you choose to download models from Modelscope via funasr-wss-server, you can skip this step.
+
+The VAD, ASR, and PUNC model resources used by the FunASR transcription service all come from Modelscope; the model addresses are listed below:
+
+| Model | Modelscope url |
+|------|---------------------------------------------------------------------------------------------------------------|
+| VAD  | https://www.modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-onnx/summary                           |
+| ASR  | https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx/summary |
+| PUNC | https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx/summary               |
+
+The service deploys quantized ONNX models. The following describes how to export and quantize ONNX models; you can export ONNX models from Modelscope or from finetuned resources:
+
+### Exporting ONNX models from Modelscope
+
+Download the model with the given model name from the Modelscope website, then export the quantized ONNX model:
+
+```shell
+python -m funasr.export.export_model \
+--export-dir ./export \
+--type onnx \
+--quantize True \
+--model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
+--model-name damo/speech_fsmn_vad_zh-cn-16k-common-pytorch \
+--model-name damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch
+```
+
+Command parameters:
+```text
+--model-name: model name on Modelscope, e.g. damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
+--export-dir: export directory for the ONNX model.
+--type: model type; currently ONNX and torch are supported.
+--quantize: quantize the model to int8.
+```
+### Exporting models from finetuned resources
+
+To deploy a finetuned model, follow these steps:
+
+Rename the finetuned model you want to deploy (e.g. 10epoch.pb) to model.pb, and replace the original model.pb from Modelscope with it. Assuming the resulting model path is /path/to/finetune/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch, convert the finetuned model to ONNX with:
+
+```shell
+python -m funasr.export.export_model --model-name /path/to/finetune/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx --quantize True
+```
+
+
+
+## How to customize service deployment
+
+The FunASR-runtime code is open source. If the server and client do not fully meet your needs, you can develop them further based on your own requirements:
+### C++ client
+
+https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/websocket
+
+### Python client
+
+https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/python/websocket
+
+### Custom client
+
+If you want to build your own client, the WebSocket communication protocol is:
+
+```text
+# First communication
+{"mode": "offline", "wav_name": wav_name, "is_speaking": True}
+# Send wav data
+Bytes data
+# Send end flag
+{"is_speaking": False}
+```
+
+### C++ server
+
+#### VAD
+```c++
+// Using the VAD model involves two steps: FsmnVadInit and FsmnVadInfer:
+FUNASR_HANDLE vad_handle=FsmnVadInit(model_path, thread_num);
+// Where: model_path contains "model-dir" and "quantize", and thread_num is the ONNX thread count;
+FUNASR_RESULT result=FsmnVadInfer(vad_handle, wav_file.c_str(), NULL, 16000);
+// Where: vad_handle is the return value of FsmnVadInit, wav_file is the audio path, and the last argument is the sampling rate (default 16k).
+```
+
+For a usage example, see: https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-online-vad.cpp
+
+#### ASR
+```c++
+// Using the ASR model involves two steps: FunOfflineInit and FunOfflineInfer:
+FUNASR_HANDLE asr_handle=FunOfflineInit(model_path, thread_num);
+// Where: model_path contains "model-dir" and "quantize", and thread_num is the ONNX thread count;
+FUNASR_RESULT result=FunOfflineInfer(asr_handle, wav_file.c_str(), RASR_NONE, NULL, 16000);
+// Where: asr_handle is the return value of FunOfflineInit, wav_file is the audio path, and the last argument is the sampling rate (default 16k).
+```
+
+For a usage example, see: https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-offline.cpp
+
+#### PUNC
+```c++
+// Using the PUNC model involves two steps: CTTransformerInit and CTTransformerInfer:
+FUNASR_HANDLE punc_handle=CTTransformerInit(model_path, thread_num);
+// Where: model_path contains "model-dir" and "quantize", and thread_num is the ONNX thread count;
+FUNASR_RESULT result=CTTransformerInfer(punc_handle, txt_str.c_str(), RASR_NONE, NULL);
+// Where: punc_handle is the return value of CTTransformerInit, and txt_str is the input text.
+```
+For a usage example, see: https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-online-punc.cpp
\ No newline at end of file
diff --git a/funasr/runtime/docs/SDK_tutorial_online.md b/funasr/runtime/docs/SDK_tutorial_online.md
new file mode 100644
index 0000000..3ec6137
--- /dev/null
+++ b/funasr/runtime/docs/SDK_tutorial_online.md
@@ -0,0 +1,196 @@
+([简体中文](./SDK_tutorial_online_zh.md) | English)
+
+# FunASR Real-time Transcription Service Convenient Deployment Tutorial
+
+FunASR offers a real-time speech-to-text service that can be easily deployed locally or on cloud servers. The service integrates various capabilities including voice activity detection (VAD) developed by the speech laboratory of DAMO Academy on the ModelScope, Paraformer-large non-streaming automatic speech recognition (ASR), Paraformer-large streaming ASR, and punctuation recovery (PUNC). The software package not only performs real-time speech-to-text conversion, but also allows high-precision transcription text correction at the end of each sentence and outputs text with punctuation, supporting high-concurrency multiple requests.
+
+## Server Configuration
+
+Users can choose appropriate server configurations based on their business needs. The recommended configurations are:
+- Configuration 1: (X86, computing-type) 4-core vCPU, 8GB memory, and a single machine can support about 32 requests.
+- Configuration 2: (X86, computing-type) 16-core vCPU, 32GB memory, and a single machine can support about 64 requests.
+- Configuration 3: (X86, computing-type) 64-core vCPU, 128GB memory, and a single machine can support about 200 requests.
+
+Detailed performance [report](./benchmark_onnx_cpp.md)
+
+Cloud service providers offer a 3-month free trial for new users. Application tutorial ([docs](./aliyun_server_tutorial.md)).
+
+## Quick Start
+
+### Server Startup
+
+`Note`: The one-click deployment tool process includes installing Docker, downloading the Docker image, and starting the service. If you want to start from the FunASR Docker image directly, please refer to the development guide ([docs](./SDK_advanced_guide_online.md)).
+
+Download the deployment tool `funasr-runtime-deploy-online-cpu-zh.sh`:
+
+```shell
+curl -O https://raw.githubusercontent.com/alibaba-damo-academy/FunASR/main/funasr/runtime/deploy_tools/funasr-runtime-deploy-online-cpu-zh.sh;
+# If there is a network problem, users in mainland China can use the following command:
+# curl -O https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/shell/funasr-runtime-deploy-online-cpu-zh.sh;
+```
+
+Execute the deployment tool and press the Enter key at the prompts to complete the installation and deployment of the server. Currently, the convenient deployment tool only supports Linux environments. For other environments, please refer to the development guide ([docs](./SDK_advanced_guide_online.md)).
+```shell
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh install --workspace ./funasr-runtime-resources
+```
+
+### Client Testing and Usage
+
+After running the above installation instructions, the client testing tool directory `samples` will be downloaded into the default installation directory ./funasr-runtime-resources ([download link](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/sample/funasr_samples.tar.gz)).
+Taking the Python client as an example: it supports multiple audio input formats (.wav, .pcm, .mp3, etc.), video input (.mp4, etc.), and multi-file wav.scp lists. For other client versions, please refer to the [documentation](#detailed-description-of-client-usage).
+
+```shell
+python3 funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode 2pass
+```
+
+## Detailed Description of Client Usage
+
+After completing the FunASR runtime-SDK service deployment on the server, you can test and use the offline file transcription service through the following steps. Currently, the following programming language client versions are supported:
+
+- [Python](#python-client)
+- [CPP](#cpp-client)
+- [html](#html-client)
+- [java](#java-client)
+
+For more client version support, please refer to the [websocket_protocol](./websocket_protocol_zh.md).
+
+### python-client
+If you want to run the client directly for testing, you can refer to the following simple instructions, using the Python version as an example:
+
+```shell
+python3 funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode offline --audio_in "../audio/asr_example.wav"
+```
+
+Command parameter instructions:
+```text
+--host is the IP address of the FunASR runtime-SDK service deployment machine, which defaults to the local IP address (127.0.0.1). If the client and the service are not on the same server, it needs to be changed to the deployment machine IP address.
+--port 10095 deployment port number
+--mode: `offline` indicates that the inference mode is one-sentence recognition; `online` indicates that the inference mode is real-time speech recognition; `2pass` indicates real-time speech recognition, and offline models are used for error correction at the end of each sentence.
+--chunk_size: indicates the latency configuration of the streaming model. [5,10,5] indicates that the current audio is 600ms, with a lookback of 300ms and a lookahead of 300ms.
+--audio_in is the audio file that needs to be transcribed, supporting file paths and file list wav.scp
+--thread_num sets the number of concurrent sending threads, default is 1
+--ssl sets whether to enable SSL certificate verification, default is 1 to enable, and 0 to disable
+```
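As a sanity check of the latency figures above, each chunk_size unit corresponds to 60 ms of audio (so 5 maps to 300 ms and 10 to 600 ms); the [lookback, current chunk, lookahead] ordering is inferred from the parameter description:

```python
# Convert a chunk_size triple into milliseconds, assuming each unit
# is 60 ms and the ordering [lookback, current chunk, lookahead],
# as implied by the parameter description above.
FRAME_MS = 60

def chunk_latency_ms(chunk_size):
    lookback, current, lookahead = chunk_size
    return (lookback * FRAME_MS, current * FRAME_MS, lookahead * FRAME_MS)

print(chunk_latency_ms([5, 10, 5]))  # (300, 600, 300)
```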
+
+### cpp-client
+
+After entering the samples/cpp directory, you can test it with CPP. The command is as follows:
+```shell
+./funasr-wss-client --server-ip 127.0.0.1 --port 10095 --wav-path ../audio/asr_example.wav
+```
+
+Command parameter description:
+```text
+--server-ip specifies the IP address of the machine where the FunASR runtime-SDK service is deployed. The default value is the local IP address (127.0.0.1). If the client and the service are not on the same server, the IP address needs to be changed to the IP address of the deployment machine.
+--port specifies the deployment port number as 10095.
+--mode: `offline` indicates that the inference mode is one-sentence recognition; `online` indicates that the inference mode is real-time speech recognition; `2pass` indicates real-time speech recognition, and offline models are used for error correction at the end of each sentence.
+--chunk_size: indicates the latency configuration of the streaming model. [5,10,5] indicates that the current audio is 600ms, with a lookback of 300ms and a lookahead of 300ms.
+--wav-path specifies the audio file to be transcribed, and supports file paths.
+--thread_num sets the number of concurrent send threads, with a default value of 1.
+--ssl sets whether to enable SSL certificate verification, with a default value of 1 for enabling and 0 for disabling.
+```
+
+### html-client
+
+To experience it directly, open `html/static/index.html` in your browser. You will see the following page, which supports microphone input and file upload.
+<img src="images/html.png" width="900"/>
+
+### java-client
+
+```shell
+FunasrWsClient --host localhost --port 10095 --audio_in ./asr_example.wav --mode offline
+```
+For more details, please refer to the [docs](../java/readme.md)
+
+## Server Usage Details
+
+### Start the deployed FunASR service
+
+If you have restarted the computer or shut down Docker after one-click deployment, you can start the FunASR service directly with the following command. The startup configuration is the same as the last one-click deployment.
+
+```shell
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh start
+```
+
+### Set SSL
+
+SSL verification is enabled by default. If you need to disable it, you can set it at startup:
+```shell
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh start --ssl 0
+```
+
+### Stop the FunASR service
+
+```shell
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh stop
+```
+
+### Release the FunASR service
+
+Release the deployed FunASR service.
+```shell
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh remove
+```
+
+### Restart the FunASR service
+
+Restart the FunASR service with the same configuration as the last one-click deployment.
+```shell
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh restart
+```
+
+### Replace the model and restart the FunASR service
+
+Replace the currently used model and restart the FunASR service. The model must be an ASR/VAD/PUNC model from ModelScope, or a model finetuned on ModelScope.
+
+```shell
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh update [--asr_model | --vad_model | --punc_model] <model_id or local model path>
+
+# e.g.
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh update --asr_model damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
+```
+
+### Update parameters and restart the FunASR service
+
+Update the configured parameters and restart the FunASR service to take effect. The parameters that can be updated include the host and Docker port numbers, as well as the number of inference and IO threads.
+
+```shell
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh update [--host_port | --docker_port] <port number>
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh update [--decode_thread_num | --io_thread_num] <the number of threads>
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh update [--workspace] <workspace in local>
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh update [--ssl] <0: disable SSL; 1: enable SSL; default: 1>
+
+# e.g.
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh update --decode_thread_num 32
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh update --workspace ./funasr-runtime-resources
+```
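The `update` options above take constrained values: ports must fall in the 1-65535 range shown in the deployment prompt, and thread counts must be positive. As a sketch of validating inputs before invoking the script, here is a hypothetical helper (not part of the SDK) that builds an update command line:

```python
# Hypothetical helper (not part of the SDK): build and sanity-check an
# `update` command line for funasr-runtime-deploy-online-cpu-zh.sh.
def build_update_cmd(option, value):
    port_opts = {"--host_port", "--docker_port"}
    thread_opts = {"--decode_thread_num", "--io_thread_num"}
    if option in port_opts and not (1 <= int(value) <= 65535):
        raise ValueError("port must be in [1, 65535]")
    if option in thread_opts and int(value) < 1:
        raise ValueError("thread count must be >= 1")
    return f"sudo bash funasr-runtime-deploy-online-cpu-zh.sh update {option} {value}"

print(build_update_cmd("--decode_thread_num", 32))
```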
+
+
+
+## Contact Us
+
+If you encounter any problems during use, please join our user group for feedback.
+
+
+| DingDing Group | Wechat |
+|:----------------------------------------------------------------------------:|:--------------------------------------------------------------:|
+| <div align="left"><img src="../../../docs/images/dingding.jpg" width="250"/> | <img src="../../../docs/images/wechat.png" width="232"/></div> |
+
+
diff --git a/funasr/runtime/docs/SDK_tutorial_online_zh.md b/funasr/runtime/docs/SDK_tutorial_online_zh.md
index 5150d06..400f2c0 100644
--- a/funasr/runtime/docs/SDK_tutorial_online_zh.md
+++ b/funasr/runtime/docs/SDK_tutorial_online_zh.md
@@ -115,6 +115,13 @@
sudo bash funasr-runtime-deploy-online-cpu-zh.sh start
```
+### Set SSL
+
+SSL verification is enabled by default. If you need to disable it, you can set it at startup:
+```shell
+sudo bash funasr-runtime-deploy-online-cpu-zh.sh start --ssl 0
+```
+
### Stop the FunASR service
```shell
@@ -161,39 +168,6 @@
sudo bash funasr-runtime-deploy-online-cpu-zh.sh update --workspace ./funasr-runtime-resources
```
-
-## Server startup configuration details
-
-### Choose the FunASR Docker image
-We recommend option 1) to use our latest released image; a historical version can also be selected.
-```text
-[1/5]
- Getting the list of docker images, please wait a few seconds.
- [DONE]
-
- Please choose the Docker image.
- 1) registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.1.0
- Enter your choice, default(1):
- You have chosen the Docker image: registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.1.0
-```
-
-
-### Set the host port provided to FunASR
-Set the host port provided to Docker, default 10095. Please make sure this port is available.
-```text
-[2/5]
- Please input the opened port in the host used for FunASR server.
- Setting the opened host port [1-65535], default(10095):
- The port of the host is 10095
- The port in Docker for FunASR server is 10095
-```
-
-### Set SSL
-
-SSL verification is enabled by default. If you need to disable it, you can set it at startup:
-```shell
-sudo bash funasr-runtime-deploy-online-cpu-zh.sh start --ssl 0
-```
## Contact Us
diff --git a/funasr/runtime/docs/websocket_protocol_zh.md b/funasr/runtime/docs/websocket_protocol_zh.md
index 848e802..4b07a45 100644
--- a/funasr/runtime/docs/websocket_protocol_zh.md
+++ b/funasr/runtime/docs/websocket_protocol_zh.md
@@ -52,7 +52,7 @@
#### First communication
The message is (it needs to be serialized with json):
```text
-{"mode": "offline", "wav_name": "wav_name", "is_speaking": True, "wav_format":"pcm", "chunk_size":[5,10,5]}
+{"mode": "2pass", "wav_name": "wav_name", "is_speaking": True, "wav_format":"pcm", "chunk_size":[5,10,5]}
```
Parameter description:
```text
diff --git a/funasr/runtime/readme.md b/funasr/runtime/readme.md
index 1a535e1..7a82521 100644
--- a/funasr/runtime/readme.md
+++ b/funasr/runtime/readme.md
@@ -5,11 +5,29 @@
It has attracted many developers to experiment with and build on it. To bridge the last mile of deploying models into real business applications, we have developed the FunASR runtime-SDK. The SDK supports several service deployments, including:
- File transcription service, Mandarin, CPU version, done
+- The real-time transcription service, Mandarin (CPU), done
- File transcription service, Mandarin, GPU version, in progress
- File transcription service, English, in progress
-- Streaming speech recognition service, is in progress
- and more.
+## The real-time transcription service, Mandarin (CPU)
+
+The FunASR real-time speech-to-text service not only transcribes speech in real time, but also corrects the transcript with high accuracy at the end of each sentence and outputs punctuated text. It supports multiple concurrent requests.
+To meet the needs of different users and scenarios, we provide the following tutorials:
+
+### Convenient Deployment Tutorial
+
+This is suitable for scenarios where there is no need to modify the service deployment SDK and the deployed model comes from ModelScope or is finetuned by the user. For detailed tutorials, please refer to [docs](./docs/SDK_tutorial_online.md)
+
+
+### Development Guide
+
+This is suitable for scenarios where there is a need to modify the service deployment SDK and the deployed model comes from ModelScope or is finetuned by the user. For detailed documentation, please refer to [docs](./docs/SDK_advanced_guide_online.md)
+
+### Technology Principles Revealed
+
+This document introduces the technical principles behind the service, its recognition accuracy and computing efficiency, and its core advantages: convenience, high accuracy, high efficiency, and support for long audio. For detailed documentation, please refer to [docs]().
+
## File Transcription Service, Mandarin (CPU)
--
Gitblit v1.9.1