From c229c401f3050f99b2501864ed3fcec88e367f22 Mon Sep 17 00:00:00 2001
From: 游雁 <zhifu.gzf@alibaba-inc.com>
Date: Wed, 19 Apr 2023 23:54:35 +0800
Subject: [PATCH] docs
---
docs/modescope_pipeline/vad_pipeline.md | 6 +++---
docs/modescope_pipeline/sv_pipeline.md | 6 +++---
docs/modescope_pipeline/punc_pipeline.md | 6 +++---
docs/modescope_pipeline/lm_pipeline.md | 6 +++---
docs/modescope_pipeline/asr_pipeline.md | 14 +++++++++-----
docs/modescope_pipeline/tp_pipeline.md | 6 +++---
6 files changed, 24 insertions(+), 20 deletions(-)
diff --git a/docs/modescope_pipeline/asr_pipeline.md b/docs/modescope_pipeline/asr_pipeline.md
index 8b7e8b8..db46de3 100644
--- a/docs/modescope_pipeline/asr_pipeline.md
+++ b/docs/modescope_pipeline/asr_pipeline.md
@@ -1,9 +1,13 @@
# Speech Recognition
+.. HINT::
+
+    The modelscope pipeline supports inference and finetuning for all the models in the [model zoo]. Here we take the Paraformer and Paraformer-online models as examples to demonstrate the usage.
## Inference
### Quick start
-#### Paraformer model
+#### [Paraformer model](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary)
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
@@ -16,7 +20,7 @@
rec_result = inference_pipeline(audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_zh.wav')
print(rec_result)
```
-#### Paraformer-online
+#### [Paraformer-online model](https://www.modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8404-online/summary)
```python
inference_pipeline = pipeline(
task=Tasks.auto_speech_recognition,
@@ -61,11 +65,11 @@
- `audio_fs`: audio sampling rate, only set when audio_in is pcm audio
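+
+The sketch below is a hypothetical example (not taken from the model card): it reuses the `inference_pipeline` object from the quick start above and shows how `audio_fs` might be passed when decoding raw PCM data, which carries no header describing its sampling rate.
+```python
+# Hypothetical example: read headerless 16 kHz, 16-bit mono PCM from a local file
+# (the file name is a placeholder) and decode it with the pipeline built above.
+with open("asr_example.pcm", "rb") as f:
+    pcm_bytes = f.read()
+
+# audio_fs tells the pipeline the sampling rate, since raw PCM bytes cannot carry it.
+rec_result = inference_pipeline(audio_in=pcm_bytes, audio_fs=16000)
+print(rec_result)
+```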
-#### Inference with you data
+### Inference with your data
-#### Inference with multi-threads on CPU
+### Inference with multi-threading on CPU
-#### Inference with multi GPU
+### Inference with multiple GPUs
## Finetune with pipeline
diff --git a/docs/modescope_pipeline/lm_pipeline.md b/docs/modescope_pipeline/lm_pipeline.md
index cb81871..f0cf06b 100644
--- a/docs/modescope_pipeline/lm_pipeline.md
+++ b/docs/modescope_pipeline/lm_pipeline.md
@@ -2,9 +2,9 @@
## Inference with pipeline
### Quick start
-#### Inference with you data
-#### Inference with multi-threads on CPU
-#### Inference with multi GPU
+### Inference with your data
+### Inference with multi-threading on CPU
+### Inference with multiple GPUs
## Finetune with pipeline
### Quick start
diff --git a/docs/modescope_pipeline/punc_pipeline.md b/docs/modescope_pipeline/punc_pipeline.md
index 67ee695..a0203d7 100644
--- a/docs/modescope_pipeline/punc_pipeline.md
+++ b/docs/modescope_pipeline/punc_pipeline.md
@@ -4,11 +4,11 @@
### Quick start
-#### Inference with you data
+### Inference with your data
-#### Inference with multi-threads on CPU
+### Inference with multi-threading on CPU
-#### Inference with multi GPU
+### Inference with multiple GPUs
## Finetune with pipeline
diff --git a/docs/modescope_pipeline/sv_pipeline.md b/docs/modescope_pipeline/sv_pipeline.md
index 6ce8c6a..c57db38 100644
--- a/docs/modescope_pipeline/sv_pipeline.md
+++ b/docs/modescope_pipeline/sv_pipeline.md
@@ -4,11 +4,11 @@
### Quick start
-#### Inference with you data
+### Inference with your data
-#### Inference with multi-threads on CPU
+### Inference with multi-threading on CPU
-#### Inference with multi GPU
+### Inference with multiple GPUs
## Finetune with pipeline
diff --git a/docs/modescope_pipeline/tp_pipeline.md b/docs/modescope_pipeline/tp_pipeline.md
index fad55e3..9b1719b 100644
--- a/docs/modescope_pipeline/tp_pipeline.md
+++ b/docs/modescope_pipeline/tp_pipeline.md
@@ -4,11 +4,11 @@
### Quick start
-#### Inference with you data
+### Inference with your data
-#### Inference with multi-threads on CPU
+### Inference with multi-threading on CPU
-#### Inference with multi GPU
+### Inference with multiple GPUs
## Finetune with pipeline
diff --git a/docs/modescope_pipeline/vad_pipeline.md b/docs/modescope_pipeline/vad_pipeline.md
index 5dcbe59..fa7b647 100644
--- a/docs/modescope_pipeline/vad_pipeline.md
+++ b/docs/modescope_pipeline/vad_pipeline.md
@@ -4,11 +4,11 @@
### Quick start
-#### Inference with you data
+### Inference with your data
-#### Inference with multi-threads on CPU
+### Inference with multi-threading on CPU
-#### Inference with multi GPU
+### Inference with multiple GPUs
## Finetune with pipeline
--
Gitblit v1.9.1