游雁
2023-04-20 3927728a432079c54e442c22bb6389c2753df853
docs
3 files changed
1 file added
50 changes
Changed files
docs/index.rst 1
docs/modescope_pipeline/lm_pipeline.md 2
docs/modescope_pipeline/quick_start.md 27
docs/modescope_pipeline/sd_pipeline.md 20
docs/index.rst
@@ -47,6 +47,7 @@
   ./modescope_pipeline/punc_pipeline.md
   ./modescope_pipeline/tp_pipeline.md
   ./modescope_pipeline/sv_pipeline.md
+   ./modescope_pipeline/sd_pipeline.md
   ./modescope_pipeline/lm_pipeline.md
.. toctree::
docs/modescope_pipeline/lm_pipeline.md
@@ -1,4 +1,4 @@
-# Speech Recognition
+# Language Models
## Inference with pipeline
### Quick start
docs/modescope_pipeline/quick_start.md
@@ -87,6 +87,33 @@
print(rec_result["scores"][0])
```
+### Speaker diarization
+#### SOND
+```python
+from modelscope.pipelines import pipeline
+from modelscope.utils.constant import Tasks
+
+# Build the SOND speaker-diarization pipeline (8 kHz CallHome English model).
+inference_diar_pipeline = pipeline(
+    mode="sond_demo",
+    num_workers=0,
+    task=Tasks.speaker_diarization,
+    diar_model_config="sond.yaml",
+    model="damo/speech_diarization_sond-en-us-callhome-8k-n16k4-pytorch",
+    sv_model="damo/speech_xvector_sv-en-us-callhome-8k-spk6135-pytorch",
+    sv_model_revision="master",
+)
+
+# The first item is the recording to diarize; the remaining items are
+# enrollment utterances, one per known speaker.
+audio_list = [
+    "https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_data/record.wav",
+    "https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_data/spk_A.wav",
+    "https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_data/spk_B.wav",
+    "https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_data/spk_B1.wav",
+]
+
+results = inference_diar_pipeline(audio_in=audio_list)
+print(results)
+```
### FAQ
#### How to switch device from GPU to CPU with pipeline
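The answer to this FAQ entry falls outside the hunk shown above. As a rough sketch (the `device` keyword is an assumption about the ModelScope `pipeline()` API, and `pick_device` is a hypothetical helper, not part of the commit), device selection could look like:

```python
import os

def pick_device() -> str:
    """Return "gpu" when a CUDA device is visible, else "cpu" (hypothetical helper)."""
    # An absent or empty CUDA_VISIBLE_DEVICES is treated as "no GPU available".
    return "gpu" if os.environ.get("CUDA_VISIBLE_DEVICES") else "cpu"

# Hypothetical usage with the diarization pipeline from the hunk above:
# inference_diar_pipline = pipeline(
#     task=Tasks.speaker_diarization, model=..., device=pick_device())
```

Forcing CPU would then just be a matter of passing `device="cpu"` explicitly instead of calling the helper.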
docs/modescope_pipeline/sd_pipeline.md
New file
@@ -0,0 +1,20 @@
+# Speaker diarization
+## Inference with pipeline
+### Quick start
+### Inference with your data
+### Inference with multiple threads on CPU
+### Inference with multiple GPUs
+## Finetune with pipeline
+### Quick start
+### Finetune with your data
+## Inference with your finetuned model