From 65396eeeff96cdc21f939828e13a2e3d0127f2c6 Mon Sep 17 00:00:00 2001
From: zhifu gao <zhifu.gzf@alibaba-inc.com>
Date: Fri, 26 Jan 2024 11:26:48 +0800
Subject: [PATCH] vad streaming return [beg, -1], [], [-1, end], [beg, end] (#1306)

---
 docs/modelscope_pipeline/quick_start.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/modelscope_pipeline/quick_start.md b/docs/modelscope_pipeline/quick_start.md
index 436fb1d..2b9219b 100644
--- a/docs/modelscope_pipeline/quick_start.md
+++ b/docs/modelscope_pipeline/quick_start.md
@@ -1,7 +1,9 @@
+([简体中文](./quick_start_zh.md)|English)
+
 # Quick Start
 
 > **Note**:
-> The modelscope pipeline supports all the models in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope) to inference and finetine. Here we take typic model as example to demonstrate the usage.
+> The modelscope pipeline supports all the models in [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/model_zoo/modelscope_models.html#pretrained-models-on-modelscope) for inference and finetuning. Here we take a typical model as an example to demonstrate the usage.
 
 ## Inference with pipeline
 
@@ -221,5 +223,4 @@
 If you want finetune with multi-GPUs, you could:
 ```shell
 CUDA_VISIBLE_DEVICES=1,2 python -m torch.distributed.launch --nproc_per_node 2 finetune.py > log.txt 2>&1
-```
-
+```
\ No newline at end of file
--
Gitblit v1.9.1