### Install [modelscope and funasr](https://github.com/alibaba-damo-academy/FunASR#installation)

```shell
pip3 install torch torchaudio
pip install -U modelscope funasr
# For users in China, you can install with the following command:
# pip install -U modelscope funasr -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html -i https://mirror.sjtu.edu.cn/pypi/web/simple
```

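If the installation succeeded, a quick import check should run without errors (a minimal sanity-check sketch; it only imports the packages installed above):

```shell
# Sanity check: all packages installed above import cleanly.
python3 -c "import torch, torchaudio, modelscope, funasr; print('install ok, torch', torch.__version__)"
```
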
### Export [onnx model](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/export)
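The linked page documents the export itself; the invocation below is a sketch assumed from those FunASR export docs (the model name is the Paraformer model used in the RTF example later), so verify the flags against your FunASR version:

```shell
# Export a ModelScope model to ONNX; with --quantize true, a quantized model_quant.onnx is also produced.
# Assumed invocation per the linked FunASR export docs; check the flags for your FunASR version.
python -m funasr.export.export_model \
    --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
    --export-dir ./export \
    --type onnx \
    --quantize true
```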

### funasr-onnx-offline-punc
```shell
./funasr-onnx-offline-punc \
    --model-dir ./asrmodel/punc_ct-transformer_zh-cn-common-vocab272727-pytorch \
    --txt-path ./punc_example.txt
```
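For reference, `punc_example.txt` is a plain text file; assuming the tool reads one unpunctuated utterance per line, an illustrative input can be created like this (the sentences are placeholders, not from the original):

```shell
# Illustrative input: one unpunctuated utterance per line (placeholder content).
cat > punc_example.txt << 'EOF'
跨境河流是养育沿岸人民的生命之源
他认为保护生态环境就是发展生产力
EOF
```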
### funasr-onnx-offline-rtf
```shell
./funasr-onnx-offline-rtf --thread-num <int32_t> --wav-scp <string>
                          [--quantize <string>] --model-dir <string>
                          [--] [--version] [-h]
Where:
   --thread-num <int32_t>
     (required)  number of threads for the RTF benchmark
   --model-dir <string>
     (required)  the model path, which contains model.onnx, config.yaml, am.mvn
   --quantize <string>
     false (default): load model.onnx from model-dir; if set to true, load model_quant.onnx instead
   --wav-scp <string>
     (required)  wave scp path

For example:
./funasr-onnx-offline-rtf \
    --model-dir ./asrmodel/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
    --quantize true \
    --wav-scp ./aishell1_test.scp \
    --thread-num 32
```
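The `--wav-scp` file uses the common Kaldi-style scp layout, one `wav_id wav_path` pair per line (layout assumed from the Kaldi convention; adjust the paths to your data):

```shell
# Illustrative Kaldi-style scp: <utterance-id> <path-to-wav> per line (placeholder paths).
cat > aishell1_test.scp << 'EOF'
BAC009S0764W0121 /data/aishell1/test/BAC009S0764W0121.wav
BAC009S0764W0122 /data/aishell1/test/BAC009S0764W0122.wav
EOF
```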

## Acknowledgements
1. This project is maintained by the [FunASR community](https://github.com/alibaba-damo-academy/FunASR).
2. We acknowledge [mayong](https://github.com/RapidAI/RapidASR/tree/main/cpp_onnx) for contributing the onnxruntime (C++ API) of Paraformer and CT-Transformer: [repo-asr](https://github.com/RapidAI/RapidASR/tree/main/cpp_onnx), [repo-punc](https://github.com/RapidAI/RapidPunc).
3. We acknowledge [ChinaTelecom](https://github.com/zhuzizyf/damo-fsmn-vad-infer-httpserver) for contributing the VAD runtime.
4. We borrowed a lot of code from [FastASR](https://github.com/chenkui164/FastASR) for the audio frontend and text postprocessing.