From 0e294ee52f54d085bdc1220692b6582d14a20bfd Mon Sep 17 00:00:00 2001
From: 雾聪 <wucong.lyb@alibaba-inc.com>
Date: Thu, 01 Feb 2024 16:18:29 +0800
Subject: [PATCH] rm quick_start_zh.md

---
 funasr/quick_start.md    | 164 --------------------------------------------
 funasr/quick_start_zh.md | 164 --------------------------------------------
 2 files changed, 0 insertions(+), 328 deletions(-)

diff --git a/funasr/quick_start.md b/funasr/quick_start.md
deleted file mode 100644
index 939c521..0000000
--- a/funasr/quick_start.md
+++ /dev/null
@@ -1,164 +0,0 @@
-([简体中文](./quick_start_zh.md)|English)
-
-# Quick Start
-
-You can use FunASR in the following ways:
-
-- Service Deployment SDK
-- Industrial model egs
-- Academic model egs
-
-## Service Deployment SDK
-
-### Python version Example
-Supports real-time streaming speech recognition; a non-streaming model corrects the output, which includes punctuation. Currently only a single client is supported; for multi-concurrency, refer to the C++ version service deployment SDK below.
-
-#### Server Deployment
-
-```shell
-cd runtime/python/websocket
-python funasr_wss_server.py --port 10095
-```
-
-#### Client Testing
-
-```shell
-python funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode 2pass --chunk_size "5,10,5"
-```
-
-For more examples, please refer to [docs](../runtime/python/websocket/README.md).
-
-### Service Deployment Software
-
-Supports both high-precision, high-efficiency, high-concurrency file transcription and low-latency real-time speech recognition, with Docker deployment and multiple concurrent requests.
-
-##### Docker Installation (optional)
-###### If you have already installed Docker, skip this step.
-
-```shell
-curl -O https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/shell/install_docker.sh;
-sudo bash install_docker.sh
-```
-
-##### Real-time Speech Recognition Service Deployment
-
-###### Docker Image Download and Launch
-Use the following command to pull and launch the FunASR software package Docker image ([Get the latest image version](https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/SDK_advanced_guide_online.md)):
-
-```shell
-sudo docker pull \
-  registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.6
-mkdir -p ./funasr-runtime-resources/models
-sudo docker run -p 10096:10095 -it --privileged=true \
-  -v $PWD/funasr-runtime-resources/models:/workspace/models \
-  registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.6
-```
-
-###### Server Start
-
-After Docker is started, start the funasr-wss-server-2pass service program:
-
-```shell
-cd FunASR/runtime
-nohup bash run_server_2pass.sh \
-  --download-model-dir /workspace/models \
-  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
-  --model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx  \
-  --online-model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online-onnx  \
-  --punc-dir damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727-onnx \
-  --itn-dir thuduj12/fst_itn_zh \
-  --hotword /workspace/models/hotwords.txt > log.txt 2>&1 &
-
-# If you want to disable SSL, add the parameter: --certfile 0
-# If you want to deploy with a timestamp or nn hotword model, please set --model-dir to the corresponding model:
-#   damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx (timestamp)
-#   damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404-onnx (nn hotword)
-# If you want to load hotwords on the server side, please configure the hotwords in the host file ./funasr-runtime-resources/models/hotwords.txt (docker mapping address is /workspace/models/hotwords.txt):
-#   One hotword per line, format (hotword weight): Alibaba 20
-```
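-
-The hotword file referenced in the comments above is plain text: one hotword per line, the word followed by a space and an integer weight. A minimal illustrative `hotwords.txt` (the entries below are examples only, not required values):
-
-```text
-阿里巴巴 20
-FunASR 30
-```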
-
-###### Client Testing
-Test with the [samples](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/sample/funasr_samples.tar.gz):
-
-```shell
-python3 funasr_wss_client.py --host "127.0.0.1" --port 10096 --mode 2pass
-```
-For more examples, please refer to the [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/SDK_advanced_guide_online.md).
-
-
-##### File Transcription Service, Mandarin (CPU)
-
-###### Docker Image Download and Launch
-Use the following command to pull and launch the FunASR software package Docker image ([Get the latest image version](https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/SDK_advanced_guide_offline.md)):
-
-```shell
-sudo docker pull \
-  registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.4.1
-mkdir -p ./funasr-runtime-resources/models
-sudo docker run -p 10095:10095 -it --privileged=true \
-  -v $PWD/funasr-runtime-resources/models:/workspace/models \
-  registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.4.1
-```
-
-###### Server Start
-
-After Docker is started, start the funasr-wss-server service program:
-
-```shell
-cd FunASR/runtime
-nohup bash run_server.sh \
-  --download-model-dir /workspace/models \
-  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
-  --model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx  \
-  --punc-dir damo/punc_ct-transformer_cn-en-common-vocab471067-large-onnx \
-  --lm-dir damo/speech_ngram_lm_zh-cn-ai-wesp-fst \
-  --itn-dir thuduj12/fst_itn_zh \
-  --hotword /workspace/models/hotwords.txt > log.txt 2>&1 &
-
-# If you want to disable SSL, add the parameter: --certfile 0
-# If you want to use timestamp or nn hotword models for deployment, please set --model-dir to the corresponding model:
-#   damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx (timestamp)
-#   damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404-onnx (nn hotword)
-# If you want to load hotwords on the server side, please configure the hotwords in the host machine file ./funasr-runtime-resources/models/hotwords.txt (docker mapping address is /workspace/models/hotwords.txt):
-#   One hotword per line, format (hotword weight): Alibaba 20
-```
-
-###### Client Testing
-
-Test with the [samples](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/sample/funasr_samples.tar.gz):
-```shell
-python3 funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode offline --audio_in "../audio/asr_example.wav"
-```
-
-For more examples, please refer to the [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/SDK_advanced_guide_offline.md).
-
-
-## Industrial Model Egs
-
-If you want to use the pre-trained industrial models in ModelScope for inference or fine-tuning training, you can refer to the following command:
-
-```python
-from modelscope.pipelines import pipeline
-from modelscope.utils.constant import Tasks
-
-inference_pipeline = pipeline(
-    task=Tasks.auto_speech_recognition,
-    model='damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
-)
-
-rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav')
-print(rec_result)
-# {'text': '欢迎大家来体验达摩院推出的语音识别模型'}
-```
-
-More examples can be found in the [docs](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_pipeline/quick_start.html).
-
-## Academic Model Egs
-
-If you want to train from scratch, which is typical for academic models, you can start training and inference with the following commands:
-
-```shell
-cd egs/aishell/paraformer
-. ./run.sh --CUDA_VISIBLE_DEVICES="0,1" --gpu_num=2
-```
-More examples can be found in the [docs](https://alibaba-damo-academy.github.io/FunASR/en/academic_recipe/asr_recipe.html).
diff --git a/funasr/quick_start_zh.md b/funasr/quick_start_zh.md
deleted file mode 100644
index 59aaeea..0000000
--- a/funasr/quick_start_zh.md
+++ /dev/null
@@ -1,164 +0,0 @@
-(简体中文|[English](./quick_start.md))
-
-<a name="快速开始"></a>
-## Quick Start
-
-You can use FunASR in the following ways:
-
-- Service deployment SDK
-- Industrial model egs
-- Academic model egs
-
-### Service Deployment SDK
-
-#### Python Version Example
-
-Supports real-time streaming speech recognition; a non-streaming model corrects the output, which includes punctuation. Currently only a single client is supported; for multi-concurrency, refer to the C++ version service deployment SDK below.
-
-##### Server Deployment
-```shell
-cd runtime/python/websocket
-python funasr_wss_server.py --port 10095
-```
-
-##### Client Testing
-```shell
-python funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode 2pass --chunk_size "5,10,5"
-#python funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode 2pass --chunk_size "8,8,4" --audio_in "./data/wav.scp"
-```
-For more examples, see the [docs](../runtime/python/websocket/README.md).
-
-<a name="cpp版本示例"></a>
-#### Service Deployment Software
-
-Supports both high-precision, high-efficiency, high-concurrency file transcription and low-latency real-time speech recognition, with Docker deployment and multiple concurrent requests.
-
-##### Preparation: Docker Installation (Optional)
-###### If you have already installed Docker, skip this step.
-
-```shell
-curl -O https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/shell/install_docker.sh;
-sudo bash install_docker.sh
-```
-
-##### Real-time Speech Recognition Service Deployment
-
-###### Docker Image Download and Launch
-Use the following command to pull and launch the FunASR software package Docker image ([Get the latest image version](https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/SDK_advanced_guide_online_zh.md)):
-
-```shell
-sudo docker pull \
-  registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.6
-mkdir -p ./funasr-runtime-resources/models
-sudo docker run -p 10096:10095 -it --privileged=true \
-  -v $PWD/funasr-runtime-resources/models:/workspace/models \
-  registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.6
-```
-
-###### Server Start
-After Docker is started, start the funasr-wss-server-2pass service program:
-```shell
-cd FunASR/runtime
-nohup bash run_server_2pass.sh \
-  --download-model-dir /workspace/models \
-  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
-  --model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx  \
-  --online-model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online-onnx  \
-  --punc-dir damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727-onnx \
-  --itn-dir thuduj12/fst_itn_zh \
-  --hotword /workspace/models/hotwords.txt > log.txt 2>&1 &
-
-# If you want to disable SSL, add the parameter: --certfile 0
-# If you want to deploy with a timestamp or nn hotword model, set --model-dir to the corresponding model:
-#   damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx (timestamp)
-#   damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404-onnx (nn hotword)
-# If you want to load hotwords on the server side, configure them in the host file ./funasr-runtime-resources/models/hotwords.txt (docker mapping address is /workspace/models/hotwords.txt):
-#   One hotword per line, format (hotword weight): Alibaba 20
-```
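-
-The hotword file referenced in the comments above is plain text: one hotword per line, the word followed by a space and an integer weight. A minimal illustrative `hotwords.txt` (the entries below are examples only, not required values):
-
-```text
-阿里巴巴 20
-FunASR 30
-```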
-
-##### Client Testing and Usage
-Test with the [samples](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/sample/funasr_samples.tar.gz):
-
-```shell
-python3 funasr_wss_client.py --host "127.0.0.1" --port 10096 --mode 2pass
-```
-For more examples, see the [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/SDK_advanced_guide_online_zh.md).
-
-##### Offline File Transcription Service Deployment
-
-###### Docker Image Launch
-
-Use the following command to pull and launch the FunASR software package Docker image ([Get the latest image version](https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/SDK_advanced_guide_offline_zh.md)):
-
-```shell
-sudo docker pull \
-  registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.4.1
-mkdir -p ./funasr-runtime-resources/models
-sudo docker run -p 10095:10095 -it --privileged=true \
-  -v $PWD/funasr-runtime-resources/models:/workspace/models \
-  registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-cpu-0.4.1
-```
-
-###### Server Start
-
-After Docker is started, start the funasr-wss-server service program:
-```shell
-cd FunASR/runtime
-nohup bash run_server.sh \
-  --download-model-dir /workspace/models \
-  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
-  --model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx  \
-  --punc-dir damo/punc_ct-transformer_cn-en-common-vocab471067-large-onnx \
-  --lm-dir damo/speech_ngram_lm_zh-cn-ai-wesp-fst \
-  --itn-dir thuduj12/fst_itn_zh \
-  --hotword /workspace/models/hotwords.txt > log.txt 2>&1 &
-
-# If you want to disable SSL, add the parameter: --certfile 0
-# If you want to deploy with a timestamp or nn hotword model, set --model-dir to the corresponding model:
-#   damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx (timestamp)
-#   damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404-onnx (nn hotword)
-# If you want to load hotwords on the server side, configure them in the host file ./funasr-runtime-resources/models/hotwords.txt (docker mapping address is /workspace/models/hotwords.txt):
-#   One hotword per line, format (hotword weight): Alibaba 20
-```
-
-###### Client Testing
-Test with the [samples](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/sample/funasr_samples.tar.gz):
-```shell
-python3 funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode offline --audio_in "../audio/asr_example.wav"
-```
-For more examples, see the [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/runtime/docs/SDK_advanced_guide_offline_zh.md).
-
-
-
-### Industrial Model Egs
-
-If you want to use the pre-trained industrial models in ModelScope for inference or fine-tuning, you can refer to the following commands:
-
-```python
-from modelscope.pipelines import pipeline
-from modelscope.utils.constant import Tasks
-
-inference_pipeline = pipeline(
-    task=Tasks.auto_speech_recognition,
-    model='damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
-)
-
-rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav')
-print(rec_result)
-# {'text': '欢迎大家来体验达摩院推出的语音识别模型'}
-```
-
-For more examples, see the [docs](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_pipeline/quick_start.html).
-
-
-### Academic Model Egs
-
-If you want to train from scratch, which is typical for academic models, you can start training and inference with the following commands:
-
-```shell
-cd egs/aishell/paraformer
-. ./run.sh --CUDA_VISIBLE_DEVICES="0,1" --gpu_num=2
-```
-
-For more examples, see the [docs](https://alibaba-damo-academy.github.io/FunASR/en/academic_recipe/asr_recipe.html).

--
Gitblit v1.9.1