1. We borrowed a lot of code from [Kaldi](http://kaldi-asr.org/) for data preparation.
2. We borrowed a lot of code from [ESPnet](https://github.com/espnet/espnet). FunASR follows the training and fine-tuning pipelines of ESPnet.
3. We referred to [Wenet](https://github.com/wenet-e2e/wenet) when building the dataloader for large-scale data training.
4. We acknowledge [ChinaTelecom](https://github.com/zhuzizyf/damo-fsmn-vad-infer-httpserver) for contributing the VAD runtime.
5. We acknowledge [DeepScience](https://www.deepscience.cn) for contributing the grpc service.

## License
This project is licensed under the [MIT License](https://opensource.org/licenses/MIT). FunASR also contains various third-party components and some code modified from other repositories under other open-source licenses.
from pathlib import Path
from typing import Union


class Paraformer:
    """
    Author: Speech Lab of DAMO Academy, Alibaba Group
    Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive
    End-to-End Speech Recognition
    https://arxiv.org/abs/2206.08317
    """
    def __init__(self, model_dir: Union[str, Path] = None,
                 batch_size: int = 1,
                 device_id: Union[str, int] = "-1",