FunASR provides a real-time speech transcription service that can be easily deployed on local or cloud servers, with the FunASR runtime-SDK as the core. It integrates speech endpoint detection (VAD), Paraformer-large non-streaming speech recognition (ASR), Paraformer-large streaming speech recognition (ASR), punctuation restoration (PUNC), and other capabilities open-sourced by the speech laboratory of DAMO Academy on the Modelscope community. The package transcribes speech to text in real time as audio streams in, and also re-transcribes each sentence at its endpoint for high-precision final output. The output text contains punctuation, and the server supports high-concurrency multi-channel requests.

<img src="images/online_structure.png" width="900"/>

| TIME | INFO | IMAGE VERSION | IMAGE ID |
|------------|-------------------------------------------------------------------------------------|-------------------------------------|--------------|
| 2023.11.09 | fix bug: no online results returned | funasr-runtime-sdk-online-cpu-0.1.5 | b16584b6d38b |
| 2023.11.08 | support server-side loading of hotwords; adapt to runtime structure changes | funasr-runtime-sdk-online-cpu-0.1.4 | 691974017c38 |
| 2023.09.19 | support hotwords, timestamps, and the ITN model in 2pass mode | funasr-runtime-sdk-online-cpu-0.1.2 | 7222c5319bcf |
| 2023.08.11 | fix some known bugs (including server crashes) | funasr-runtime-sdk-online-cpu-0.1.1 | bdbdd0b27dee |
| 2023.08.07 | 1.0 released | funasr-runtime-sdk-online-cpu-0.1.0 | bdbdd0b27dee |
## Quick Start

### Docker install
If you have already installed Docker, skip this step!
```shell
curl -O https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/shell/install_docker.sh
sudo bash install_docker.sh
```
If you do not have Docker installed, please refer to [Docker Installation](https://alibaba-damo-academy.github.io/FunASR/en/installation/docker.html).

### Pull Docker Image
Use the following command to pull and start the FunASR software package docker image:
```shell
sudo docker pull registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.5
mkdir -p ./funasr-runtime-resources/models
sudo docker run -p 10096:10095 -it --privileged=true -v $PWD/funasr-runtime-resources/models:/workspace/models registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.5
```

### Launching the Server
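The server is started inside the container with `run_server_2pass.sh`, whose parameters are documented below. A minimal sketch; the working directory and Modelscope model IDs are the defaults used in the FunASR docs and should be checked against your deployment:

```shell
# Run inside the started docker container; path and model IDs are assumptions
# taken from the FunASR docs -- verify them against your image version.
cd FunASR/runtime
nohup bash run_server_2pass.sh \
  --download-model-dir /workspace/models \
  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
  --model-dir damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-onnx \
  --online-model-dir damo/speech_paraformer-large-online_asr_nat-zh-cn-16k-common-vocab8404-online-onnx \
  --punc-dir damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727-onnx \
  --itn-dir thuduj12/fst_itn_zh > log.txt 2>&1 &
```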
### More details about the script run_server_2pass.sh:
```text
--download-model-dir: Base directory into which models are downloaded from Modelscope by model ID.
--model-dir: Modelscope model ID or local path of the offline ASR model.
--online-model-dir: Modelscope model ID or local path of the streaming ASR model.
--quantize: True for the quantized ASR model, False for the non-quantized one. Default is True.
--vad-dir: Modelscope model ID or local path of the VAD model.
--vad-quant: True for the quantized VAD model, False for the non-quantized one. Default is True.
--punc-dir: Modelscope model ID or local path of the PUNC model.
--punc-quant: True for the quantized PUNC model, False for the non-quantized one. Default is True.
--itn-dir: Modelscope model ID or local path of the ITN model.
--port: Port number that the server listens on. Default is 10095.
--decoder-thread-num: Number of inference threads the server starts. Default is 8.
--io-thread-num: Number of IO threads the server starts. Default is 1.