|                                                                              Model Name                                                                              |                                 Task Details                                 |          Training Data           | Parameters |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------:|:--------------------------------:|:----------:|
| <nobr>paraformer-zh ([⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) [🤗]() )</nobr> | speech recognition, with timestamps, non-streaming | 60000 hours, Mandarin | 220M |
| <nobr>paraformer-zh-spk ( [⭐](https://modelscope.cn/models/damo/speech_paraformer-large-vad-punc-spk_asr_nat-zh-cn/summary) [🤗]() )</nobr> | speech recognition with speaker diarization, with timestamps, non-streaming | 60000 hours, Mandarin | 220M |
| <nobr>paraformer-zh-online ( [⭐](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online/summary) [🤗]() )</nobr> | speech recognition, streaming | 60000 hours, Mandarin | 220M |
| <nobr>paraformer-en ( [⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-en-16k-common-vocab10020/summary) [🤗]() )</nobr> | speech recognition, with timestamps, non-streaming | 50000 hours, English | 220M |
| <nobr>paraformer-en-spk ( [⭐]() [🤗]() )</nobr> | speech recognition with speaker diarization, non-streaming | 50000 hours, English | 220M |
| <nobr>conformer-en ( [⭐](https://modelscope.cn/models/damo/speech_conformer_asr-en-16k-vocab4199-pytorch/summary) [🤗]() )</nobr> | speech recognition, non-streaming | 50000 hours, English | 220M |

FunASR supports inference and fine-tuning of models trained on tens of thousands of hours of industrial data. For more details, please refer to [modelscope_egs](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_pipeline/quick_start.html). It also supports training and fine-tuning of models on academic standard datasets. For more information, please refer to [egs](https://alibaba-damo-academy.github.io/FunASR/en/academic_recipe/asr_recipe.html).

Below is a quick start tutorial. Test audio files ([Mandarin](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav), [English]()).

### Command-line usage

```shell
funasr --model paraformer-zh asr_example_zh.wav
```

Note: it supports recognition of a single audio file, as well as a file list in Kaldi-style wav.scp format: `wav_id wav_path`.

### Speech Recognition (Non-streaming)
```python
from funasr import infer

# Build an inference pipeline: paraformer-zh for recognition, with voice
# activity detection (fsmn-vad) and punctuation restoration (ct-punc).
# Exact argument names may vary across FunASR versions.
p = infer(model="paraformer-zh", vad_model="fsmn-vad", punc_model="ct-punc", model_hub="ms")

res = p("asr_example_zh.wav", batch_size_token=5000)
print(res)
```

For more detailed information, please refer to the [service deployment documentation](runtime/readme.md).
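Recognition results can be post-processed as ordinary Python objects; a minimal sketch, assuming the common FunASR convention of a list of dicts whose `"text"` field holds the transcript (the sample data below is hypothetical, not real pipeline output):

```python
# Hypothetical recognition output in the list-of-dicts shape that
# FunASR pipelines commonly return; "key" is the utterance ID and
# "text" is the recognized transcript.
res = [{"key": "asr_example_zh", "text": "欢迎使用 FunASR"}]

for hyp in res:
    # Print each utterance ID alongside its recognized text.
    print(hyp.get("key", "<no-key>"), "->", hyp["text"])
```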


<a name="Community Communication"></a>
<a name="contact"></a>
## Community Communication
If you encounter problems in use, you can raise an issue directly on the GitHub page.
