## Starting the server

Use the following script to start the server:
```shell
nohup bash run_server.sh \
--download-model-dir /workspace/models \
--vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
--model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
--punc-dir damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx > log.out 2>&1 &

# To disable SSL, add: --certfile 0
# To deploy the timestamp or hotword model, set --model-dir to the corresponding model:
# damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-onnx (timestamp)
# damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404-onnx (hotword)
```

More details about the script run_server.sh:

The FunASR-wss-server supports downloading models from ModelScope. You can set the model download directory (--download-model-dir, default /workspace/models) and the model IDs (--model-dir, --vad-dir, --punc-dir). The server also accepts the following options:

```shell
--port: port number that the server listens on. Default: 10095.
--decoder-thread-num: number of inference threads the server starts. Default: 8.
--io-thread-num: number of IO threads the server starts. Default: 1.
--certfile <string>: SSL certificate file. Default: ../../../ssl_key/server.crt. To disable SSL, set it to "".
--keyfile <string>: SSL key file. Default: ../../../ssl_key/server.key. To disable SSL, set it to "".
```
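Putting these options together, a hedged sketch of a customized launch might look like the following (the port and thread counts are illustrative values, not recommendations):

```shell
# Illustrative only: custom port, more decoder threads, SSL disabled.
nohup bash run_server.sh \
--download-model-dir /workspace/models \
--vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
--model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
--punc-dir damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx \
--port 10096 --decoder-thread-num 16 --io-thread-num 2 \
--certfile "" --keyfile "" > log.out 2>&1 &
```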

The FunASR-wss-server also supports loading models from a local path (see Preparing Model Resources for detailed instructions on preparing local model resources). In that case, pass local model directories instead of ModelScope model IDs to --model-dir, --vad-dir, and --punc-dir.

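As a hedged sketch of loading from a local path (the directory layout under /workspace/models is an assumption based on the model IDs above; adjust to where your models actually live):

```shell
# Assumed local layout; replace with your actual model directories.
nohup bash run_server.sh \
--vad-dir /workspace/models/damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
--model-dir /workspace/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
--punc-dir /workspace/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-onnx > log.out 2>&1 &
```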
### python-client
```shell
python funasr_wss_client.py --host "127.0.0.1" --port 10095 --mode offline --audio_in "./data/wav.scp" --send_without_sleep --output_dir "./results"
```

Introduction to command parameters:

```shell
--output_dir: path where recognition results are written.
--ssl: whether to use SSL encryption. SSL is enabled by default.
--mode: recognition mode; use offline here.
--hotword: if the AM is a hotword model, a hotword file (*.txt, one hotword per line) or hotwords separated by spaces (e.g., 阿里巴巴 达摩院).
```

### c++-client

Introduction to command parameters:

```shell
--output_dir: path where recognition results are written.
--ssl: whether to use SSL encryption. SSL is enabled by default.
--mode: recognition mode; use offline here.
--hotword: if the AM is a hotword model, a hotword file (*.txt, one hotword per line) or hotwords separated by spaces (e.g., 阿里巴巴 达摩院).
```
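For reference, an invocation of the c++-client might look like the following. The binary name and the --server-ip/--port/--wav-path flags are assumptions here (not taken from this document), so verify them against your built client's --help output:

```shell
# Hypothetical invocation; flag names are assumptions, check --help.
./funasr-wss-client --server-ip 127.0.0.1 --port 10095 \
    --wav-path ./data/test.wav --mode offline --output_dir ./results
```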

### Custom client

```text
# First communication
{"mode": "offline", "wav_name": wav_name, "is_speaking": True}
# or, for the hotword model:
{"mode": "offline", "wav_name": wav_name, "is_speaking": True, "hotwords": "hotword1|hotword2"}
# Send wav data
Bytes data
# Send end flag
{"is_speaking": False}
```

To add punctuation to recognized text from C++, call the CTTransformer inference API:

```cpp
FUNASR_RESULT result = CTTransformerInfer(punc_handle, txt_str.c_str(), RASR_NONE, NULL);
// Where: punc_handle is the return value of CTTransformerInit, txt_str is the input text
```
See the usage example for details, [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-offline-punc.cpp)
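The offline exchange described under Custom client can be sketched in Python. Only the message fields ("mode", "wav_name", "is_speaking", "hotwords") come from the protocol above; the helper names and the 1024-byte chunk size are assumptions for illustration:

```python
import json


def build_first_message(wav_name, hotwords=None):
    """Handshake JSON sent before any audio bytes (fields from the protocol above)."""
    msg = {"mode": "offline", "wav_name": wav_name, "is_speaking": True}
    if hotwords:
        # Hotwords are joined with "|" as shown in the protocol example.
        msg["hotwords"] = hotwords
    return json.dumps(msg)


def build_end_message():
    """End flag telling the server the audio stream is finished."""
    return json.dumps({"is_speaking": False})


def chunk_audio(pcm_bytes, chunk_size=1024):
    """Split raw audio bytes into fixed-size chunks for sending (size assumed)."""
    return [pcm_bytes[i:i + chunk_size] for i in range(0, len(pcm_bytes), chunk_size)]
```

A client would send the first message, then each chunk as a binary websocket frame, then the end message, and finally read the recognition result.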