# Speech Recognition

> **Note**:
> The modelscope pipeline supports all the models in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope) for inference and finetuning. Here we take typical models as examples to demonstrate the usage.

## Inference

#### [RNN-T-online model]()

#### [MFCCA Model](https://www.modelscope.cn/models/NPU-ASLP/speech_mfcca_asr-zh-cn-16k-alimeeting-vocab4950/summary)
For more model details, please refer to the [docs](https://www.modelscope.cn/models/NPU-ASLP/speech_mfcca_asr-zh-cn-16k-alimeeting-vocab4950/summary).
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Build the ASR inference pipeline for the MFCCA model
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='NPU-ASLP/speech_mfcca_asr-zh-cn-16k-alimeeting-vocab4950',
    model_revision='v3.0.0'
)

# Decode one example wav (a URL here; a local path also works)
rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav')
print(rec_result)
```

#### API-reference
##### Define pipeline
- `task`: `Tasks.auto_speech_recognition`
- `model`: model name in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope), or model path in local disk
- `ngpu`: `1` (Default), decoding on GPU. If ngpu=0, decoding on CPU
- `ncpu`: `1` (Default), sets the number of threads used for intraop parallelism on CPU
- `output_dir`: `None` (Default), the output path of results if set
- `batch_size`: `1` (Default), batch size when decoding
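
As a quick illustration of the parameters above, here is a minimal sketch that builds a CPU decoding pipeline and writes results to disk (the MFCCA model from the example above is reused; the thread count and output directory are arbitrary placeholder values):
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# ngpu=0 selects CPU decoding; ncpu controls intra-op CPU threads;
# output_dir makes the pipeline write recognition results to disk.
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='NPU-ASLP/speech_mfcca_asr-zh-cn-16k-alimeeting-vocab4950',
    model_revision='v3.0.0',
    ngpu=0,
    ncpu=4,
    output_dir='./results',
    batch_size=1)
```
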
##### Infer pipeline
- `audio_in`: the input to decode, which could be:
  - wav_path, `e.g.`: asr_example.wav
  - pcm_path, `e.g.`: asr_example.pcm
  - audio bytes stream, `e.g.`: bytes data from a microphone
  - audio sample points, `e.g.`: `audio, rate = soundfile.read("asr_example_zh.wav")`, the dtype is numpy.ndarray or torch.Tensor
  - wav.scp, kaldi style wav list (`wav_id \t wav_path`), `e.g.`:
```text
asr_example1 ./audios/asr_example1.wav
asr_example2 ./audios/asr_example2.wav
```
When `audio_in` is a `wav.scp` file, `output_dir` must be set to save the output results.
- `audio_fs`: audio sampling rate, only set when audio_in is pcm audio
- `output_dir`: `None` (Default), the output path of results if set

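The different `audio_in` forms can all be passed to the same pipeline object. A minimal sketch (the file names are placeholders and `inference_pipeline` is the pipeline defined earlier):
```python
import soundfile

# A local wav or pcm path; audio_fs is only needed for raw pcm input
print(inference_pipeline(audio_in='asr_example.wav'))
print(inference_pipeline(audio_in='asr_example.pcm', audio_fs=16000))

# Audio samples already loaded into memory (numpy.ndarray)
audio, rate = soundfile.read('asr_example_zh.wav')
print(inference_pipeline(audio_in=audio))

# A kaldi-style wav.scp list; build the pipeline with output_dir set so the
# batch results can be written to disk
print(inference_pipeline(audio_in='./wav.scp'))
```
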
### Inference with multi-thread CPUs or multi GPUs
FunASR also offers the recipe [egs_modelscope/asr/TEMPLATE/infer.sh](https://github.com/alibaba-damo-academy/FunASR/blob/main/egs_modelscope/asr/TEMPLATE/infer.sh) to decode with multi-thread CPUs or multi GPUs.

- Setting parameters in `infer.sh`
  - `model`: model name in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope), or model path in local disk
  - `njob`: only used for CPU inference (`gpu_inference`=`false`), `64` (Default), the number of jobs for CPU decoding
  - `checkpoint_dir`: only used for inferring finetuned models, the path dir of finetuned models
  - `checkpoint_name`: only used for inferring finetuned models, `valid.cer_ctc.ave.pb` (Default), which checkpoint is used to infer
  - `decoding_mode`: `normal` (Default), decoding mode for the UniASR model (fast, normal, offline)
  - `hotword_txt`: `None` (Default), hotword file for the contextual paraformer model (the hotword file name ends with .txt)

- Decode with multi GPUs:
```shell
# Sketch of the multi-GPU call; check infer.sh for the exact argument list.
# --gpuid_list and --output_dir are assumed flags, not documented above.
bash infer.sh \
    --model "NPU-ASLP/speech_mfcca_asr-zh-cn-16k-alimeeting-vocab4950" \
    --gpu_inference true \
    --gpuid_list "0,1" \
    --output_dir "./results"
```

## Finetune with pipeline

- Set the training parameters in `finetune.py`, e.g.:
  - `max_epoch`: number of training epochs
  - `lr`: learning rate

- Training data formats:
```sh
cat ./example_data/text
BAC009S0002W0122 而 对 楼 市 成 交 抑 制 作 用 最 大 的 限 购
BAC009S0002W0123 也 成 为 地 方 政 府 的 眼 中 钉
english_example_1 hello world
english_example_2 go swim 去 游 泳

cat ./example_data/wav.scp
BAC009S0002W0122 /mnt/data/wav/train/S0002/BAC009S0002W0122.wav
BAC009S0002W0123 /mnt/data/wav/train/S0002/BAC009S0002W0123.wav
english_example_1 /mnt/data/wav/train/S0002/english_example_1.wav
english_example_2 /mnt/data/wav/train/S0002/english_example_2.wav
```
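
Before launching finetuning, it can help to check that `text` and `wav.scp` are consistent, i.e. every utterance id appears in both files and every wav path exists. A minimal sketch using only the standard library (the paths follow the example above):
```python
import os

def read_kaldi_map(path):
    """Read a kaldi-style two-column file (utt_id <whitespace> value) into a dict."""
    entries = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            utt_id, rest = line.split(maxsplit=1)
            entries[utt_id] = rest
    return entries

text = read_kaldi_map('./example_data/text')
wav_scp = read_kaldi_map('./example_data/wav.scp')

# Every utterance id should appear in both files
print('utterances without audio:', sorted(set(text) - set(wav_scp)))
print('audio without transcripts:', sorted(set(wav_scp) - set(text)))

# Every wav path should point to an existing file
for utt_id, wav_path in wav_scp.items():
    if not os.path.isfile(wav_path):
        print(f'missing wav file for {utt_id}: {wav_path}')
```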

- Then you can run the pipeline to finetune with:
```shell
python finetune.py
```

## Inference with your finetuned model

- Setting parameters in [egs_modelscope/asr/TEMPLATE/infer.sh](https://github.com/alibaba-damo-academy/FunASR/blob/main/egs_modelscope/asr/TEMPLATE/infer.sh) is the same as in the [docs](https://github.com/alibaba-damo-academy/FunASR/tree/main/egs_modelscope/asr/TEMPLATE#inference-with-multi-thread-cpus-or-multi-gpus) above; `model` is the name of the model on ModelScope that you finetuned.

- Decode with multi-thread CPUs (`--njob` applies to CPU decoding; see the parameter list above):
```shell
# The command head is a sketch; check infer.sh for the exact argument list.
bash infer.sh \
    --model "<your_finetuned_model_name>" \
    --gpu_inference false \
    --njob 64 \
    --checkpoint_dir "./checkpoint" \
    --checkpoint_name "valid.cer_ctc.ave.pb"
```
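
Since `model` can also be a local path (see the pipeline parameters above), a finetuned model can likewise be used directly from Python. A minimal sketch, assuming `./checkpoint` is the directory that holds your finetuned model files and `asr_example.wav` is a placeholder input:
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Point `model` at the local directory that holds the finetuned model files.
inference_pipeline = pipeline(
    task=Tasks.auto_speech_recognition,
    model='./checkpoint')

print(inference_pipeline(audio_in='asr_example.wav'))
```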