([简体中文](./README_zh.md)|English)

# Voice Activity Detection

> **Note**:
> The modelscope pipeline supports all the models in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/model_zoo/modelscope_models.html#pretrained-models-on-modelscope) to inference and finetune. Here we take the FSMN-VAD model as an example to demonstrate the usage.

## Inference

#### [FSMN-VAD-online model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/summary)
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
import soundfile

inference_pipeline = pipeline(
    task=Tasks.voice_activity_detection,
    model='damo/speech_fsmn_vad_zh-cn-16k-common-pytorch',
)
speech, sample_rate = soundfile.read("example/vad_example.wav")

# streaming cache shared across chunks; set is_final=True on the last chunk
param_dict = {'in_cache': dict(), 'is_final': False}
chunk_stride = 1600  # 100ms per chunk at a 16kHz sample rate

# first chunk, 100ms
speech_chunk = speech[0:chunk_stride]
rec_result = inference_pipeline(audio_in=speech_chunk, param_dict=param_dict)
print(rec_result)
# next chunk, 100ms
speech_chunk = speech[chunk_stride:chunk_stride + chunk_stride]
rec_result = inference_pipeline(audio_in=speech_chunk, param_dict=param_dict)
print(rec_result)
```
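To process a whole recording, the two-chunk pattern above generalizes to a loop that slices the audio at a fixed stride and sets `is_final` on the last chunk so the model can flush its cache. A minimal sketch, reusing `inference_pipeline`, `speech`, `chunk_stride`, and `param_dict` from the snippet above:

```python
# sketch of a full streaming loop over the whole utterance
speech_length = speech.shape[0]
for sample_offset in range(0, speech_length, chunk_stride):
    # flag the final chunk so the pipeline flushes its internal cache
    param_dict['is_final'] = sample_offset + chunk_stride >= speech_length
    speech_chunk = speech[sample_offset:sample_offset + chunk_stride]
    rec_result = inference_pipeline(audio_in=speech_chunk, param_dict=param_dict)
    print(rec_result)
```
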
### API-reference
#### Define pipeline
- `task`: `Tasks.voice_activity_detection`
- `model`: model name in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/model_zoo/modelscope_models.html#pretrained-models-on-modelscope), or model path in local disk
- `ngpu`: `1` (Default), decoding on GPU. If ngpu=0, decoding on CPU
- `ncpu`: `1` (Default), sets the number of threads used for intraop parallelism on CPU
- `output_dir`: `None` (Default), the output path of results if set
- `batch_size`: `1` (Default), batch size when decoding
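
For instance, a pipeline combining the parameters above could be defined as follows (a sketch; the values shown are just the defaults listed above):

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.voice_activity_detection,
    model='damo/speech_fsmn_vad_zh-cn-16k-common-pytorch',
    ngpu=1,           # 1: decode on GPU; 0: decode on CPU
    ncpu=1,           # number of intraop CPU threads
    batch_size=1,     # batch size when decoding
    output_dir=None,  # set a path to also write results to disk
)
```
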
#### Infer pipeline
- `audio_in`: the input to decode (see the sketch after this list), which could be:
  - wav_path, `e.g.`: asr_example.wav,
  - pcm_path, `e.g.`: asr_example.pcm,
  - audio bytes stream, `e.g.`: bytes data from a microphone
  - audio sample point, `e.g.`: `audio, rate = soundfile.read("asr_example_zh.wav")`, the dtype is numpy.ndarray or torch.Tensor
  - wav.scp, kaldi style wav list (`wav_id \t wav_path`), `e.g.`:
```text
asr_example1 ./audios/asr_example1.wav
asr_example2 ./audios/asr_example2.wav
```

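All of these input types go through the same call; a short sketch with placeholder file names:

```python
import soundfile

# decode a wav file by path (placeholder file name)
print(inference_pipeline(audio_in="asr_example.wav"))

# decode raw samples read into a numpy.ndarray
audio, rate = soundfile.read("asr_example_zh.wav")
print(inference_pipeline(audio_in=audio))

# decode a kaldi-style wav.scp; results are also written to output_dir if set
print(inference_pipeline(audio_in="./data/test/wav.scp"))
```
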
### Inference with multi-thread CPUs or multi GPUs
FunASR also offers the recipe [egs_modelscope/vad/TEMPLATE/infer.sh](https://github.com/alibaba-damo-academy/FunASR/blob/main/egs_modelscope/vad/TEMPLATE/infer.sh) to decode with multi-thread CPUs or multi GPUs.

#### Settings of `infer.sh`
- `model`: model name in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/model_zoo/modelscope_models.html#pretrained-models-on-modelscope), or model path in local disk
- `data_dir`: the dataset dir, which needs to include `wav.scp`
- `output_dir`: output dir of the recognition results
- `batch_size`: `64` (Default), batch size of inference on gpu
- `gpu_inference`: `true` (Default), whether to perform gpu decoding; set false for CPU inference
- `gpuid_list`: `0,1` (Default), which gpu_ids are used to infer
- `njob`: only used for CPU inference (`gpu_inference`=`false`), `64` (Default), the number of jobs for CPU decoding
- `checkpoint_dir`: only used for decoding finetuned models, the path dir of finetuned models
- `checkpoint_name`: only used for decoding finetuned models, `valid.cer_ctc.ave.pb` (Default), which checkpoint is used to decode

#### Decode with multi GPUs:
```shell
bash infer.sh \
    --model "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch" \
    --data_dir "./data/test" \
    --output_dir "./results" \
    --batch_size 1 \
    --gpu_inference true \
    --gpuid_list "0,1"
```
#### Decode with multi-thread CPUs:
```shell
bash infer.sh \
    --model "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch" \
    --data_dir "./data/test" \
    --output_dir "./results" \
    --gpu_inference false \
    --njob 64
```