[//]: # (<div align="left"><img src="docs/images/funasr_logo.jpg" width="400"/></div>)

([简体中文](./README_zh.md)|English)

# FunASR: A Fundamental End-to-End Speech Recognition Toolkit

<p align="left">
    <a href=""><img src="https://img.shields.io/badge/OS-Linux%2C%20Win%2C%20Mac-brightgreen.svg"></a>
</p>

[**News**](https://github.com/alibaba-damo-academy/FunASR#whats-new)
| [**Highlights**](#highlights)
| [**Installation**](#installation)
| [**Docs**](https://alibaba-damo-academy.github.io/FunASR/en/index.html)
| [**Tutorial**](https://github.com/alibaba-damo-academy/FunASR/wiki#funasr%E7%94%A8%E6%88%B7%E6%89%8B%E5%86%8C)
| [**Papers**](https://github.com/alibaba-damo-academy/FunASR#citations)
| [**Runtime**](./funasr/runtime/readme.md)
| [**Model Zoo**](./docs/model_zoo/modelscope_models.md)
| [**Quick Start**](#quick-start)
| [**Contact**](#contact)

[**M2MET2.0 Guidance_CN**](https://alibaba-damo-academy.github.io/FunASR/m2met2_cn/index.html)
| [**M2MET2.0 Guidance_EN**](https://alibaba-damo-academy.github.io/FunASR/m2met2/index.html)

## Multi-Channel Multi-Party Meeting Transcription 2.0 (M2MET2.0) Challenge
The M2MeT2.0 challenge was held as part of ASRU2023 and has now concluded. The baseline system is built on FunASR and is provided as a recipe for the AliMeeting corpus. For more details, please see the M2MET2.0 guidance ([CN](https://alibaba-damo-academy.github.io/FunASR/m2met2_cn/index.html)/[EN](https://alibaba-damo-academy.github.io/FunASR/m2met2/index.html)).

<a name="highlights"></a>
## Highlights
- FunASR is a fundamental speech recognition toolkit that offers a variety of features, including speech recognition (ASR), voice activity detection (VAD), punctuation restoration, language models, speaker verification, speaker diarization, and multi-talker ASR. FunASR provides convenient scripts and tutorials, supporting inference and fine-tuning of pre-trained models.
- We have released a vast collection of academic and industrial pretrained models on [ModelScope](https://www.modelscope.cn/models?page=1&tasks=auto-speech-recognition), which can be accessed through our [Model Zoo](https://github.com/alibaba-damo-academy/FunASR/blob/main/docs/model_zoo/modelscope_models.md). The representative [Paraformer-large](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary), a non-autoregressive end-to-end speech recognition model, combines high accuracy, high efficiency, and convenient deployment, supporting the rapid construction of speech recognition services; it also obtains the best performance on many tasks of the [SpeechIO leaderboard](https://github.com/SpeechColab/Leaderboard). For more details on service deployment, please refer to the [service deployment document](funasr/runtime/readme_cn.md).
- FunASR provides an easy-to-use pipeline to fine-tune pretrained models from [ModelScope](https://www.modelscope.cn/models?page=1&tasks=auto-speech-recognition). Compared to the [ESPnet](https://github.com/espnet/espnet) framework, training on large-scale datasets in FunASR is much faster owing to its optimized dataloader.

<a name="whats-new"></a>
## What's new:

For the release notes, please refer to [news](https://github.com/alibaba-damo-academy/FunASR/releases).
- 2023/07/17: BAT released a low-latency and low-memory-consumption RNN-T model. For more details, please refer to [BAT](egs/aishell/bat).
- 2023/07/03: The CPU version of the Chinese offline file transcription service has been released with one-click deployment. For more details, please refer to the [deployment documentation](funasr/runtime/docs/SDK_tutorial.md).
- 2023/06/26: The ASRU2023 Multi-Channel Multi-Party Meeting Transcription Challenge 2.0 has concluded and the results have been announced. For more details, please refer to [M2MeT2.0](https://alibaba-damo-academy.github.io/FunASR/m2met2/index.html).

<a name="installation"></a>
## Installation

Install from pip:
```shell
pip install -U funasr
# For users in China, you can install with the command:
# pip install -U funasr -i https://mirror.sjtu.edu.cn/pypi/web/simple
```

Or install from source code:
```shell
git clone https://github.com/alibaba/FunASR.git && cd FunASR
pip install -e ./
# For users in China, you can install with the command:
# pip install -e ./ -i https://mirror.sjtu.edu.cn/pypi/web/simple
```

If you want to use the pretrained models from ModelScope, you should also install modelscope:
```shell
pip install -U modelscope
# For users in China, you can install with the command:
# pip install -U modelscope -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html -i https://mirror.sjtu.edu.cn/pypi/web/simple
```

For more details, please refer to the [installation docs](https://alibaba-damo-academy.github.io/FunASR/en/installation/installation.html).

## Deployment Service

FunASR supports deploying pre-trained or further fine-tuned models as a service. The CPU version of the Chinese offline file transcription service has been released; details can be found in the [docs](funasr/runtime/docs/SDK_tutorial.md). More detailed information about service deployment can be found in the [deployment roadmap](funasr/runtime/readme_cn.md).

<a name="quick-start"></a>
## Quick Start

FunASR supports inference and fine-tuning of models trained on industrial datasets of tens of thousands of hours. For more details, please refer to [modelscope_egs](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_pipeline/quick_start.html). It also supports training and fine-tuning of models on academic standard datasets. For more details, please refer to [egs](https://alibaba-damo-academy.github.io/FunASR/en/academic_recipe/asr_recipe.html). The models cover speech recognition (ASR), voice activity detection (VAD), punctuation restoration, language models, speaker verification, speaker diarization, and multi-talker speech recognition. For a detailed list of models, please refer to the [Model Zoo](https://github.com/alibaba-damo-academy/FunASR/blob/main/docs/model_zoo/modelscope_models.md).
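
To get a feel for the API, below is a minimal inference sketch using the ModelScope pipeline interface with the pretrained Paraformer-large model from the Model Zoo. It assumes modelscope is installed as described above; the audio path `asr_example.wav` is a placeholder for your own 16 kHz audio file:

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Build an ASR inference pipeline from a pretrained Model Zoo model.
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch',
)

# 'asr_example.wav' is a placeholder; a URL to an audio file also works.
rec_result = inference_pipeline(audio_in='asr_example.wav')
print(rec_result)
```

The same pipeline pattern applies to the other model types (VAD, punctuation restoration, and so on) by swapping the task and the model ID from the Model Zoo.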

[//]: # ()
[//]: # (## Usage)

[//]: # (For users who are new to FunASR and ModelScope, please refer to FunASR Docs([CN](https://alibaba-damo-academy.github.io/FunASR/cn/index.html) / [EN](https://alibaba-damo-academy.github.io/FunASR/en/index.html)))
| | | |
| | | <a name="contact"></a> |
| | | ## Contact |
| | | |
| | | If you have any questions about FunASR, please contact us by |
| | |

## Contributors

| <div align="left"><img src="docs/images/damo.png" width="180"/></div> | <div align="left"><img src="docs/images/nwpu.png" width="260"/></div> | <img src="docs/images/China_Telecom.png" width="200"/> | <img src="docs/images/RapidAI.png" width="200"/> | <img src="docs/images/aihealthx.png" width="200"/> | <img src="docs/images/DeepScience.png" width="200"/> |
|:---:|:---:|:---:|:---:|:---:|:---:|

## Acknowledge

1. We borrowed a lot of code from [Kaldi](http://kaldi-asr.org/) for data preparation.
2. We borrowed a lot of code from [ESPnet](https://github.com/espnet/espnet). FunASR follows the training and fine-tuning pipelines of ESPnet.
3. We referred to [Wenet](https://github.com/wenet-e2e/wenet) when building the dataloader for large-scale data training.
4. We acknowledge [DeepScience](https://www.deepscience.cn) for contributing the gRPC service.

The contributor list can be found in [contributors](./Acknowledge).

## License

This project is licensed under the [MIT License](https://opensource.org/licenses/MIT). FunASR also contains various third-party components and some code modified from other repos under other open-source licenses.
The use of pretrained models is subject to the [model license](./MODEL_LICENSE).


## Citations

``` bibtex
@inproceedings{gao2023funasr,
  title={FunASR: A Fundamental End-to-End Speech Recognition Toolkit},
  author={Zhifu Gao and Zerui Li and Jiaming Wang and Haoneng Luo and Xian Shi and Mengzhe Chen and Yabin Li and Lingyun Zuo and Zhihao Du and Zhangyu Xiao and Shiliang Zhang},
  booktitle={INTERSPEECH},
  year={2023}
}
@inproceedings{gao2022paraformer,
  title={Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition},
  author={Gao, Zhifu and Zhang, Shiliang and McLoughlin, Ian and Yan, Zhijie},
  booktitle={Proc. Interspeech 2022},
  pages={2063--2067},
  doi={10.21437/Interspeech.2022-9996},
  year={2022}
}
@inproceedings{gao2020universal,
  title={Universal ASR: Unifying Streaming and Non-Streaming ASR Using a Single Encoder-Decoder Model},
  author={Gao, Zhifu and Zhang, Shiliang and Lei, Ming and McLoughlin, Ian},
  booktitle={arXiv preprint arXiv:2010.14099},
  year={2020}
}
@inproceedings{shi2023achieving,
  title={Achieving Timestamp Prediction While Recognizing with Non-Autoregressive End-to-End ASR Model},
  author={Shi, Xian and Chen, Yanni and Zhang, Shiliang and Yan, Zhijie},
  booktitle={arXiv preprint arXiv:2301.12343},
  year={2023}
}
```