Note:
The ModelScope pipeline supports inference and fine-tuning for all models in the model zoo. Here we take the FSMN-VAD model as an example to demonstrate the usage.
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.voice_activity_detection,
    model='damo/speech_fsmn_vad_zh-cn-16k-common-pytorch',
)
segments_result = inference_pipeline(
    audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav')
print(segments_result)
```
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
import soundfile

inference_pipeline = pipeline(
    task=Tasks.voice_activity_detection,
    model='damo/speech_fsmn_vad_zh-cn-16k-common-pytorch',
)

speech, sample_rate = soundfile.read("example/asr_example.wav")
param_dict = {"in_cache": dict(), "is_final": False}
chunk_stride = 1600  # 100ms at 16 kHz

# first chunk, 100ms
speech_chunk = speech[0:chunk_stride]
rec_result = inference_pipeline(audio_in=speech_chunk, param_dict=param_dict)
print(rec_result)
# next chunk, 100ms
speech_chunk = speech[chunk_stride:chunk_stride + chunk_stride]
rec_result = inference_pipeline(audio_in=speech_chunk, param_dict=param_dict)
print(rec_result)
```
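The two manual chunk calls above generalize to a loop over the whole waveform. Below is a minimal, self-contained sketch of that chunked streaming pattern; the `run_vad` function is a hypothetical stand-in for the ModelScope pipeline call (not part of FunASR), `in_cache` carries model state across chunks, and `is_final` marks the last chunk so the model can flush its state:

```python
import numpy as np

def run_vad(chunk, param_dict):
    # Placeholder for inference_pipeline(audio_in=chunk, param_dict=param_dict).
    # Here it just reports the chunk length; a real pipeline would return
    # speech segments and update param_dict["in_cache"] internally.
    return {"num_samples": len(chunk), "is_final": param_dict["is_final"]}

sample_rate = 16000
speech = np.zeros(sample_rate)  # stand-in for soundfile.read(...): 1 s of audio
chunk_stride = 1600             # 100 ms per chunk at 16 kHz

param_dict = {"in_cache": dict(), "is_final": False}
results = []
for start in range(0, len(speech), chunk_stride):
    chunk = speech[start:start + chunk_stride]
    # mark the last chunk so the model can flush its internal state
    param_dict["is_final"] = start + chunk_stride >= len(speech)
    results.append(run_vad(chunk, param_dict))

print(len(results))             # 10 chunks of 100 ms each
print(results[-1]["is_final"])  # True
```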
For the full demo code, please refer to the demo.
- task: Tasks.voice_activity_detection
- model: model name in the model zoo, or model path on local disk
- ngpu: 1 (default), decode on GPU; if ngpu=0, decode on CPU
- ncpu: 1 (default), sets the number of threads used for intra-op parallelism on CPU
- output_dir: None (default), the output path of the results if set
- batch_size: 1 (default), batch size when decoding
- audio_in: the input to decode, which could be:
  - a wav file path, e.g.: asr_example.wav
  - a pcm file path, e.g.: asr_example.pcm
  - bytes data, e.g. from a microphone
  - audio samples, e.g.: `audio, rate = soundfile.read("asr_example_zh.wav")`, with dtype numpy.ndarray or torch.Tensor
  - a `wav.scp` file, a Kaldi-style wav list (`wav_id \t wav_path`), e.g.:
    ```
    cat wav.scp
    asr_example1  ./audios/asr_example1.wav
    asr_example2  ./audios/asr_example2.wav
    ```
    In the case of `wav.scp` input, `output_dir` must be set to save the output results
- audio_fs: audio sampling rate, only set when audio_in is pcm audio
- output_dir: None (default), the output path of the results if set

FunASR also offers the recipe infer.sh to decode with multi-threaded CPUs or multiple GPUs.
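To illustrate the `wav.scp` format described above, here is a minimal sketch of how such a Kaldi-style list can be parsed (the inline file contents and parsing code are illustrative, not part of FunASR):

```python
# A Kaldi-style wav.scp holds one "wav_id wav_path" pair per line.
scp_text = """asr_example1 ./audios/asr_example1.wav
asr_example2 ./audios/asr_example2.wav
"""

wav_table = {}
for line in scp_text.strip().splitlines():
    # split on the first whitespace run: id first, path is the remainder
    wav_id, wav_path = line.split(maxsplit=1)
    wav_table[wav_id] = wav_path

print(wav_table["asr_example1"])  # ./audios/asr_example1.wav
print(len(wav_table))             # 2
```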
Setting parameters in infer.sh
- data_dir: the dataset directory, which needs to include ${data_dir}/wav.scp. If ${data_dir}/text also exists, the CER will be computed
- gpu_inference: whether to decode on GPU; if gpu_inference=false, CPU decoding is used, and njob should be set

Decode with multiple GPUs:
```shell
bash infer.sh \
    --model "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch" \
    --data_dir "./data/test" \
    --output_dir "./results" \
    --gpu_inference true \
    --gpuid_list "0,1"
```
Decode with multi-threaded CPUs:
```shell
bash infer.sh \
    --model "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch" \
    --data_dir "./data/test" \
    --output_dir "./results" \
    --gpu_inference false \
    --njob 64
```

Results
The decoding results can be found in $output_dir/1best_recog/text.cer, which includes the recognition result of each sample and the CER metric of the whole test set.
If you decode the SpeechIO test sets, you can use textnorm with stage=3; DETAILS.txt and RESULTS.txt record the results and the CER after text normalization.
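The CER reported above is the character error rate: the Levenshtein (edit) distance between hypothesis and reference characters, divided by the reference length. A minimal sketch of that computation (an illustrative implementation, not FunASR's own scoring code):

```python
def edit_distance(ref, hyp):
    """Character-level Levenshtein distance via a one-row DP table."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i                       # prev = dp[i-1][j-1]
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,      # deletion
                                     dp[j - 1] + 1,  # insertion
                                     prev + (r != h))  # substitution
    return dp[-1]

def cer(ref, hyp):
    return edit_distance(ref, hyp) / len(ref)

print(cer("hello", "helo"))  # 1 deletion / 5 chars = 0.2
```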