shixian.shi
2023-11-23 c9f1b4e8a2e903f74de20d019e70307c26e93c3e
README_zh.md
@@ -54,15 +54,15 @@
|                                                                            Model Name                                                                             |                     Task Details                      |        Training Data         | Parameters |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------:|:----------------------------:|:----------:|
|   paraformer-zh <br> ([⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary)  [🤗]() )    |  speech recognition, with timestamps, non-real-time   |   60,000 hours, Mandarin     |    220M    |
|                paraformer-zh-spk <br> ( [⭐](https://modelscope.cn/models/damo/speech_paraformer-large-vad-punc-spk_asr_nat-zh-cn/summary)  [🤗]() )               | speech recognition with speaker diarization, with timestamps, non-real-time | 60,000 hours, Mandarin | 220M |
|       paraformer-zh-online <br> ( [⭐](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online/summary) [🤗]() )       |            speech recognition, real-time              |   60,000 hours, Mandarin     |    220M    |
|         paraformer-en <br> ( [⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-en-16k-common-vocab10020/summary) [🤗]() )        |          speech recognition, non-real-time            |    50,000 hours, English     |    220M    |
|                                                               paraformer-en-spk <br> ([⭐]() [🤗]() )                                                              |          speech recognition, non-real-time            |    50,000 hours, English     |    220M    |
|                     conformer-en <br> ( [⭐](https://modelscope.cn/models/damo/speech_conformer_asr-en-16k-vocab4199-pytorch/summary) [🤗]() )                     |          speech recognition, non-real-time            |    50,000 hours, English     |    220M    |
|                     ct-punc <br> ( [⭐](https://modelscope.cn/models/damo/punc_ct-transformer_cn-en-common-vocab471067-large/summary) [🤗]() )                     |               punctuation restoration                 |  100M, Mandarin and English  |    1.1G    |
|                          fsmn-vad <br> ( [⭐](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/summary) [🤗]() )                         |         voice activity detection, real-time           | 5,000 hours, Mandarin and English |  0.4M  |
|                          fa-zh <br> ( [⭐](https://modelscope.cn/models/damo/speech_timestamp_prediction-v1-16k-offline/summary) [🤗]() )                          |         character-level timestamp prediction          |   50,000 hours, Mandarin     |    38M     |
<a name="快速开始"></a>
@@ -70,6 +70,15 @@
FunASR supports inference and fine-tuning of models trained on tens of thousands of hours of industrial data; for details, see ([modelscope_egs](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_pipeline/quick_start.html)). It also supports training and fine-tuning of models on standard academic datasets; for details, see ([egs](https://alibaba-damo-academy.github.io/FunASR/en/academic_recipe/asr_recipe.html)).
Below is a quick-start tutorial. Test audio: ([Mandarin](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav), [English]())
### Command-Line Usage
```shell
funasr --model paraformer-zh asr_example_zh.wav
```
Note: recognition of a single audio file is supported, as is a file list in Kaldi-style wav.scp format: `wav_id   wav_path`
### Non-Real-Time Speech Recognition
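The Kaldi-style wav.scp format mentioned above maps each utterance ID to an audio path, one pair per line separated by whitespace. As an illustrative sketch (not part of FunASR itself; the helper name is hypothetical), such a list can be parsed like this:

```python
def parse_wav_scp(text: str) -> dict[str, str]:
    """Parse Kaldi-style wav.scp content: each line is 'wav_id wav_path'."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        # Split on the first run of whitespace only, so paths may contain spaces.
        wav_id, wav_path = line.split(maxsplit=1)
        entries[wav_id] = wav_path
    return entries

scp = "utt_001   /data/audio/a.wav\nutt_002   /data/audio/b.wav\n"
print(parse_wav_scp(scp))
# {'utt_001': '/data/audio/a.wav', 'utt_002': '/data/audio/b.wav'}
```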
```python
from funasr import infer
@@ -131,8 +140,8 @@
## Community Contributors
| <div align="left"><img src="docs/images/alibaba.png" width="260"/></div> | <div align="left"><img src="docs/images/nwpu.png" width="260"/></div> | <img src="docs/images/China_Telecom.png" width="200"/> | <img src="docs/images/RapidAI.png" width="200"/> | <img src="docs/images/aihealthx.png" width="200"/> | <img src="docs/images/XVERSE.png" width="250"/> |
|:------------------------------------------------------------------------:|:---------------------------------------------------------------------:|:------------------------------------------------------:|:------------------------------------------------:|:--------------------------------------------------:|:-----------------------------------------------:|
For the list of contributors, please refer to the ([Acknowledgements](./Acknowledge.md)).