From f28280a84cd9a36d8b9fa48ba53382823ee88c44 Mon Sep 17 00:00:00 2001
From: 游雁 <zhifu.gzf@alibaba-inc.com>
Date: Wed, 19 Apr 2023 18:57:02 +0800
Subject: [PATCH] docs

---
 docs/modescope_pipeline/asr_pipeline.md |   16 ++++++++++++++++
 1 file changed, 16 insertions(+), 0 deletions(-)

diff --git a/docs/modescope_pipeline/asr_pipeline.md b/docs/modescope_pipeline/asr_pipeline.md
index 715f110..f5bbe9f 100644
--- a/docs/modescope_pipeline/asr_pipeline.md
+++ b/docs/modescope_pipeline/asr_pipeline.md
@@ -17,6 +17,22 @@
 print(rec_result)
 ```
 
+#### API docs
+##### Define the pipeline
+- `task`: `Tasks.auto_speech_recognition`
+- `model`: a model name from the [model zoo](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_models.html#pretrained-models-on-modelscope), or a model path on local disk
+- `ngpu`: `1` (default), decode on GPU; set `ngpu=0` to decode on CPU
+- `ncpu`: `1` (default), the number of threads used for intra-op parallelism on CPU
+- `output_dir`: `None` (default), if set, the path where results are written
+- `batch_size`: `1` (default), the batch size used for decoding
+##### Infer with the pipeline
+- `audio_in`: the input to decode, which could be:
+  - a wav file path, e.g. `asr_example.wav`
+  - a pcm file path
+  - an audio bytes stream
+  - audio sample points
+  - a `wav.scp` (a Kaldi-style list of wav files)
+
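The `audio_in` forms listed above can be told apart by simple type and file-extension checks. The sketch below illustrates one way such dispatch could work; the helper name `classify_audio_in` and the returned labels are hypothetical, not part of the FunASR API:

```python
def classify_audio_in(audio_in):
    """Return an illustrative label for the kind of input `audio_in` is."""
    if isinstance(audio_in, bytes):
        return "bytes_stream"      # raw audio bytes
    if isinstance(audio_in, (list, tuple)):
        return "sample_points"     # e.g. a sequence of PCM sample values
    if isinstance(audio_in, str):
        if audio_in.endswith(".scp"):
            return "wav_scp"       # Kaldi-style wav.scp list file
        if audio_in.endswith(".pcm"):
            return "pcm_path"
        return "wav_path"          # otherwise treat as a path to a wav file
    raise TypeError(f"unsupported audio_in type: {type(audio_in)!r}")

print(classify_audio_in("asr_example.wav"))  # -> wav_path
```

In the real pipeline the dispatch is internal; the user simply passes any of the supported forms as `audio_in`.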
 #### Inference with your data
 
 #### Inference with multi-threads on CPU

--
Gitblit v1.9.1