(简体中文|[English](./README.md))

# FunASR: A Fundamental End-to-End Speech Recognition Toolkit

<a name="安装教程"></a>
## Installation

- Before installing FunASR, make sure the following dependencies are installed:
```text
python>=3.8
torch>=1.13
torchaudio
```
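The Python floor above can be checked programmatically before installing; a minimal sketch (it only verifies the interpreter version, not the torch/torchaudio versions):

```python
import sys

# FunASR requires python>=3.8; verify the current interpreter meets the floor.
meets_floor = sys.version_info >= (3, 8)
print("python>=3.8:", meets_floor)
```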

- Install via pip
```shell
pip3 install -U funasr
```

- Or install from source
```shell
git clone https://github.com/alibaba/FunASR.git && cd FunASR
pip3 install -e ./
```
To use industrial pretrained models, install modelscope and huggingface_hub (optional):

```shell
pip3 install -U modelscope huggingface huggingface_hub
```

## Model Zoo

| Model Name | Task Details | Training Data | Parameters |
|:----------:|:-----------:|:-------------:|:----------:|
| paraformer-zh-streaming <br> ( [⭐](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online/summary) [🤗](https://huggingface.co/funasr/paraformer-zh-streaming) ) | Speech recognition, streaming | 60,000 hours, Mandarin | 220M |
| paraformer-en <br> ( [⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-en-16k-common-vocab10020/summary) [🤗](https://huggingface.co/funasr/paraformer-en) ) | Speech recognition, non-streaming | 50,000 hours, English | 220M |
| conformer-en <br> ( [⭐](https://modelscope.cn/models/damo/speech_conformer_asr-en-16k-vocab4199-pytorch/summary) [🤗](https://huggingface.co/funasr/conformer-en) ) | Speech recognition, non-streaming | 50,000 hours, English | 220M |
| ct-punc <br> ( [⭐](https://modelscope.cn/models/damo/punc_ct-transformer_cn-en-common-vocab471067-large/summary) [🤗](https://huggingface.co/funasr/ct-punc) ) | Punctuation restoration | 100M, Mandarin and English | 290M |
| fsmn-vad <br> ( [⭐](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/summary) [🤗](https://huggingface.co/funasr/fsmn-vad) ) | Voice activity detection, streaming | 5,000 hours, Mandarin and English | 0.4M |
| fa-zh <br> ( [⭐](https://modelscope.cn/models/damo/speech_timestamp_prediction-v1-16k-offline/summary) [🤗](https://huggingface.co/funasr/fa-zh) ) | Character-level timestamp prediction | 50,000 hours, Mandarin | 38M |
| cam++ <br> ( [⭐](https://modelscope.cn/models/iic/speech_campplus_sv_zh-cn_16k-common/summary) [🤗](https://huggingface.co/funasr/campplus) ) | Speaker verification/diarization | 5,000 hours | 7.2M |

Note: `chunk_size` configures streaming latency. `[0,10,5]` means the real-time display granularity is `10*60=600ms` and the lookahead (future) context is `5*60=300ms`. Each inference call takes `600ms` of input (`16000*0.6=9600` samples) and outputs the corresponding text; for the last speech segment, `is_final=True` must be set to force output of the final word.
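The latency arithmetic in the note above can be sketched in a few lines (plain arithmetic only; the 60ms frame size and 16kHz sample rate come from the note):

```python
# Streaming latency arithmetic for chunk_size = [0, 10, 5], 60ms frames at 16 kHz.
chunk_size = [0, 10, 5]
frame_ms = 60
sample_rate = 16000

display_ms = chunk_size[1] * frame_ms                 # 10*60 = 600ms display granularity
lookahead_ms = chunk_size[2] * frame_ms               # 5*60 = 300ms of future context
samples_per_chunk = sample_rate * display_ms // 1000  # 16000*0.6 = 9600 samples per call
print(display_ms, lookahead_ms, samples_per_chunk)    # → 600 300 9600
```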

<details><summary>More examples</summary>

### Voice Activity Detection (Non-Streaming)
```python
from funasr import AutoModel

model = AutoModel(model="fsmn-vad")

wav_file = f"{model.model_path}/example/vad_example.wav"
res = model.generate(input=wav_file)
print(res)
```

### Emotion Recognition
```python
from funasr import AutoModel

model = AutoModel(model="emotion2vec_plus_large")

wav_file = f"{model.model_path}/example/test.wav"

res = model.generate(wav_file, output_dir="./outputs", granularity="utterance", extract_embedding=False)
print(res)
```

For more details, see the [tutorial](docs/tutorial/README_zh.md);
for more usage, see the [model examples](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining).

</details>

## Export ONNX
### Export from the command line
```shell