lzr265946
2023-02-13 0a7c0661b13e299f68c458d19fc766839cd01bfb
add paraformer-large-contextual egs_modelscope
2 files added, 40 lines
egs_modelscope/asr/paraformer/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404/README.md (19 lines)
egs_modelscope/asr/paraformer/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404/infer.py (21 lines)
egs_modelscope/asr/paraformer/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404/README.md
New file
@@ -0,0 +1,19 @@
# ModelScope Model
## How to infer using a pretrained Paraformer-large Model
### Inference
You can use the pretrained model for inference directly.
- Set the parameters in `infer.py`:
    - <strong>audio_in:</strong> input audio; supports wav files, wav.scp, URLs, bytes, and parsed audio data.
    - <strong>output_dir:</strong> output directory; must be set when the input is a wav.scp file.
    - <strong>batch_size:</strong> batch size used during inference.
    - <strong>param_dict:</strong> extra parameters; set the hotword list here.
- Then run inference with:
```shell
python infer.py
```
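The hotword list passed through `param_dict['hotword']` can also be a local file instead of the hosted URL. A minimal sketch, assuming the hotword file is plain text with one hotword per line (the format of the example `hotword.txt` is an assumption; the hotwords below are hypothetical):

```python
# Sketch: build a local hotword list and point param_dict at it.
# Assumption: one hotword per line in a plain-text file.
hotwords = ["魔搭", "达摩院"]  # hypothetical hotwords for illustration

with open("hotword.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(hotwords))

# Pass a local path instead of a URL.
param_dict = {"hotword": "hotword.txt"}
print(param_dict["hotword"])
```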
egs_modelscope/asr/paraformer/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404/infer.py
New file
@@ -0,0 +1,21 @@
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

if __name__ == '__main__':
    # Hotword list: a plain-text file (local path or URL), one hotword per line.
    param_dict = dict()
    param_dict['hotword'] = "https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/hotword.txt"
    # Input audio: a wav file, wav.scp, URL, bytes, or parsed audio data.
    audio_in = "https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_hotword.wav"
    output_dir = None  # must be set when audio_in is a wav.scp file
    batch_size = 1
    # Build the ASR pipeline with the contextual Paraformer-large model.
    inference_pipeline = pipeline(
        task=Tasks.auto_speech_recognition,
        model="damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404",
        output_dir=output_dir,
        batch_size=batch_size,
        param_dict=param_dict)
    rec_result = inference_pipeline(audio_in=audio_in)
    print(rec_result)
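Downstream code usually wants just the transcript rather than the raw result object. A hedged sketch for extracting it, assuming the pipeline returns a dict with a `text` key (common for ModelScope ASR pipelines, but verify against this model's actual output; the sample result below is fabricated for illustration):

```python
# Sketch: pull the transcript out of the pipeline result.
# Assumption: rec_result is a dict with a 'text' key; adjust if
# this model returns a different structure.
def get_transcript(rec_result):
    if isinstance(rec_result, dict):
        return rec_result.get("text", "")
    return str(rec_result)

# Hypothetical result for illustration only:
sample = {"text": "欢迎使用达摩院语音识别"}
print(get_transcript(sample))
```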