雾聪
2023-08-08 e922505e80c355d102b475596e5e87cee14bea4b
Merge branch 'main' of https://github.com/alibaba-damo-academy/FunASR into main
12 files changed, 24 lines changed
egs_modelscope/asr/TEMPLATE/infer.py 1
egs_modelscope/asr/paraformer/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404/README_zh.md 2
egs_modelscope/asr/paraformer/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online/README_zh.md 2
egs_modelscope/asr/paraformer/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/README_zh.md 2
egs_modelscope/asr/paraformer/speech_paraformer-tiny-commandword_asr_nat-zh-cn-16k-vocab544-pytorch/infer.py 3
egs_modelscope/asr/paraformer/speech_paraformer_asr-en-16k-vocab4199-pytorch/README_zh.md 2
egs_modelscope/asr/paraformer/speech_paraformer_asr_nat-zh-cn-16k-aishell1-vocab4234-pytorch/README_zh.md 2
egs_modelscope/asr/paraformer/speech_paraformer_asr_nat-zh-cn-16k-aishell2-vocab5212-pytorch/README_zh.md 2
egs_modelscope/asr/paraformer/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8404-online/README_zh.md 2
egs_modelscope/asr/paraformer/speech_paraformer_asr_nat-zh-cn-8k-common-vocab8358-tensorflow1/README.md 2
egs_modelscope/asr/paraformer/speech_paraformer_asr_nat-zh-cn-8k-common-vocab8358-tensorflow1/README_zh.md 2
funasr/version.txt 2
egs_modelscope/asr/TEMPLATE/infer.py
@@ -11,6 +11,7 @@
         model=args.model,
         output_dir=args.output_dir,
         batch_size=args.batch_size,
+        update_model=False,
         param_dict={"decoding_model": args.decoding_mode, "hotword": args.hotword_txt}
     )
     inference_pipeline(audio_in=args.audio_in)
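The pipeline call above reads every setting from command-line arguments. A minimal argparse sketch that would supply those attributes (the flag names and defaults here are assumptions inferred from the `args.*` attributes in the hunk, not the repository's actual CLI):

```python
import argparse

# Hypothetical parser matching the attributes used in infer.py
# (args.model, args.output_dir, args.batch_size, args.decoding_mode,
# args.hotword_txt, args.audio_in); flag names and defaults are guesses.
parser = argparse.ArgumentParser(description="ModelScope ASR inference")
parser.add_argument("--model", type=str, required=True)
parser.add_argument("--audio_in", type=str, required=True)
parser.add_argument("--output_dir", type=str, default="./results")
parser.add_argument("--batch_size", type=int, default=1)
parser.add_argument("--decoding_mode", type=str, default="normal")
parser.add_argument("--hotword_txt", type=str, default=None)

# Parse a sample command line instead of sys.argv for demonstration.
args = parser.parse_args(["--model", "damo/some-model", "--audio_in", "test.wav"])
print(args.batch_size)  # → 1
```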
egs_modelscope/asr/paraformer/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404/README_zh.md
@@ -1 +1 @@
-../TEMPLATE/README_zh.md
+../../TEMPLATE/README_zh.md
egs_modelscope/asr/paraformer/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online/README_zh.md
@@ -1 +1 @@
-../TEMPLATE/README_zh.md
+../../TEMPLATE/README_zh.md
egs_modelscope/asr/paraformer/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/README_zh.md
@@ -1 +1 @@
-../TEMPLATE/README_zh.md
+../../TEMPLATE/README_zh.md
egs_modelscope/asr/paraformer/speech_paraformer-tiny-commandword_asr_nat-zh-cn-16k-vocab544-pytorch/infer.py
@@ -20,7 +20,8 @@
         task=Tasks.auto_speech_recognition,
         model="damo/speech_paraformer-tiny-commandword_asr_nat-zh-cn-16k-vocab544-pytorch",
         output_dir=output_dir_job,
-        batch_size=64
+        batch_size=64,
+        update_model=False,
     )
     audio_in = os.path.join(split_dir, "wav.{}.scp".format(idx))
     inference_pipeline(audio_in=audio_in)
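This script runs each parallel job against its own shard of a Kaldi-style `wav.scp`, named `wav.<idx>.scp`. A small sketch of how those per-job paths are formed (`split_dir` here is a hypothetical example directory, not the script's actual value):

```python
import os

# Per-job shard paths: wav.0.scp, wav.1.scp, ... under the split directory.
# split_dir is a made-up example; the real script derives it from its config.
split_dir = "exp/split"
paths = [os.path.join(split_dir, "wav.{}.scp".format(idx)) for idx in range(3)]
print(paths)  # → ['exp/split/wav.0.scp', 'exp/split/wav.1.scp', 'exp/split/wav.2.scp']
```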
egs_modelscope/asr/paraformer/speech_paraformer_asr-en-16k-vocab4199-pytorch/README_zh.md
@@ -1 +1 @@
-../TEMPLATE/README_zh.md
+../../TEMPLATE/README_zh.md
egs_modelscope/asr/paraformer/speech_paraformer_asr_nat-zh-cn-16k-aishell1-vocab4234-pytorch/README_zh.md
@@ -1 +1 @@
-../TEMPLATE/README_zh.md
+../../TEMPLATE/README_zh.md
egs_modelscope/asr/paraformer/speech_paraformer_asr_nat-zh-cn-16k-aishell2-vocab5212-pytorch/README_zh.md
@@ -1 +1 @@
-../TEMPLATE/README_zh.md
+../../TEMPLATE/README_zh.md
egs_modelscope/asr/paraformer/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8404-online/README_zh.md
@@ -1 +1 @@
-../TEMPLATE/README_zh.md
+../../TEMPLATE/README_zh.md
egs_modelscope/asr/paraformer/speech_paraformer_asr_nat-zh-cn-8k-common-vocab8358-tensorflow1/README.md
@@ -1 +1 @@
-../TEMPLATE/README.md
+../../TEMPLATE/README.md
egs_modelscope/asr/paraformer/speech_paraformer_asr_nat-zh-cn-8k-common-vocab8358-tensorflow1/README_zh.md
@@ -1 +1 @@
-../TEMPLATE/README_zh.md
+../../TEMPLATE/README_zh.md
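All of the README changes above fix the same one-level depth error: these READMEs are relative links, and from `egs_modelscope/asr/paraformer/<model>/` the shared `TEMPLATE` directory sits two levels up (beside `paraformer/`), not one. A quick check with `posixpath` (the model directory name is one of those in the commit, used here purely for illustration):

```python
import posixpath

# Directory containing the symlinked README (taken from the file list above).
link_dir = ("egs_modelscope/asr/paraformer/"
            "speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch")

# Old target resolves inside paraformer/, where no TEMPLATE directory exists;
# the new target resolves to egs_modelscope/asr/TEMPLATE, the real template.
old_target = posixpath.normpath(posixpath.join(link_dir, "../TEMPLATE/README_zh.md"))
new_target = posixpath.normpath(posixpath.join(link_dir, "../../TEMPLATE/README_zh.md"))

print(old_target)  # → egs_modelscope/asr/paraformer/TEMPLATE/README_zh.md
print(new_target)  # → egs_modelscope/asr/TEMPLATE/README_zh.md
```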
funasr/version.txt
@@ -1 +1 @@
-0.7.1
+0.7.2