From a05e753d11d9c36983ec4e58c421dbcf86d1dcd4 Mon Sep 17 00:00:00 2001
From: Xian Shi <40013335+R1ckShi@users.noreply.github.com>
Date: Tue, 17 Oct 2023 16:47:27 +0800
Subject: [PATCH] Merge branch 'main' into dev_onnx
---
funasr/runtime/python/onnxruntime/README.md | 31 +++++++++++++++++++++++--------
1 file changed, 23 insertions(+), 8 deletions(-)
diff --git a/funasr/runtime/python/onnxruntime/README.md b/funasr/runtime/python/onnxruntime/README.md
index 4379965..ceeb459 100644
--- a/funasr/runtime/python/onnxruntime/README.md
+++ b/funasr/runtime/python/onnxruntime/README.md
@@ -1,13 +1,17 @@
# ONNXRuntime-python
-
-## Install `funasr_onnx`
+## Install `funasr-onnx`
install from pip
+
```shell
-pip install -U funasr_onnx
+pip install -U funasr-onnx
# For the users in China, you could install with the command:
-# pip install -U funasr_onnx -i https://mirror.sjtu.edu.cn/pypi/web/simple
+# pip install -U funasr-onnx -i https://mirror.sjtu.edu.cn/pypi/web/simple
+# If you want to export a .onnx file, you should install modelscope and funasr
+pip install -U modelscope funasr
+# For the users in China, you could install with the command:
+# pip install -U modelscope funasr -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
or install from source code
@@ -23,7 +27,9 @@
## Inference with runtime
### Speech Recognition
+
#### Paraformer
+
```python
from funasr_onnx import Paraformer
from pathlib import Path
@@ -36,6 +42,7 @@
result = model(wav_path)
print(result)
```
+
- `model_dir`: the model name on ModelScope, or a local path downloaded from ModelScope. If a local path is set, it should contain `model.onnx`, `config.yaml`, and `am.mvn`
- `batch_size`: `1` (Default), the batch size during inference
- `device_id`: `-1` (Default), infer on CPU. If you want to infer on GPU, set it to the GPU id (please make sure that you have installed onnxruntime-gpu)
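As the `model_dir` bullet above notes, a local model directory must contain `model.onnx`, `config.yaml`, and `am.mvn`. A minimal sketch of checking a directory before constructing the model, using only `pathlib` — the helper name is our own illustration, not part of the `funasr_onnx` API:

```python
from pathlib import Path

# Files the README says a local model_dir must contain
REQUIRED_FILES = ("model.onnx", "config.yaml", "am.mvn")

def check_model_dir(model_dir):
    """Return the names of required files missing from a local model_dir."""
    root = Path(model_dir)
    return [name for name in REQUIRED_FILES if not (root / name).exists()]

# An empty directory would report all three files as missing
missing = check_model_dir("my_model_dir")
```

Running this before `Paraformer(model_dir)` gives a clearer error than a failed model load.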
@@ -49,7 +56,9 @@
#### Paraformer-online
### Voice Activity Detection
+
#### FSMN-VAD
+
```python
from funasr_onnx import Fsmn_vad
from pathlib import Path
@@ -62,6 +71,7 @@
result = model(wav_path)
print(result)
```
+
- `model_dir`: the model name on ModelScope, or a local path downloaded from ModelScope. If a local path is set, it should contain `model.onnx`, `config.yaml`, and `am.mvn`
- `batch_size`: `1` (Default), the batch size during inference
- `device_id`: `-1` (Default), infer on CPU. If you want to infer on GPU, set it to the GPU id (please make sure that you have installed onnxruntime-gpu)
@@ -72,8 +82,8 @@
Output: `List[str]`: recognition result
-
#### FSMN-VAD-online
+
```python
from funasr_onnx import Fsmn_vad_online
import soundfile
@@ -104,6 +114,7 @@
if segments_result:
print(segments_result)
```
+
- `model_dir`: the model name on ModelScope, or a local path downloaded from ModelScope. If a local path is set, it should contain `model.onnx`, `config.yaml`, and `am.mvn`
- `batch_size`: `1` (Default), the batch size during inference
- `device_id`: `-1` (Default), infer on CPU. If you want to infer on GPU, set it to the GPU id (please make sure that you have installed onnxruntime-gpu)
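The online VAD above consumes the waveform in fixed-size steps rather than as a whole file. A sketch of the step arithmetic only, with no audio I/O or `funasr_onnx` dependency — the 16 kHz sample rate and 100 ms step are illustrative assumptions, not values mandated by the library:

```python
SAMPLE_RATE = 16000  # assumed sample rate in Hz
STEP_MS = 100        # illustrative step size in milliseconds

STEP = SAMPLE_RATE * STEP_MS // 1000  # samples per step (1600 here)

def iter_chunks(speech_length, step=STEP):
    """Yield (begin, end, is_final) sample ranges covering the whole waveform."""
    for begin in range(0, speech_length, step):
        end = min(begin + step, speech_length)
        yield begin, end, end == speech_length

# A 4000-sample waveform splits into two full steps and one final partial step
chunks = list(iter_chunks(4000))
# chunks == [(0, 1600, False), (1600, 3200, False), (3200, 4000, True)]
```

Each `speech[begin:end]` slice would then be passed to the online model, with the final flag telling it no more audio follows.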
@@ -114,9 +125,10 @@
Output: `List[str]`: recognition result
-
### Punctuation Restoration
+
#### CT-Transformer
+
```python
from funasr_onnx import CT_Transformer
@@ -127,6 +139,7 @@
result = model(text_in)
print(result[0])
```
+
- `model_dir`: the model name on ModelScope, or a local path downloaded from ModelScope. If a local path is set, it should contain `model.onnx`, `config.yaml`, and `am.mvn`
- `device_id`: `-1` (Default), infer on CPU. If you want to infer on GPU, set it to the GPU id (please make sure that you have installed onnxruntime-gpu)
- `quantize`: `False` (Default), load `model.onnx` from `model_dir`. If set to `True`, load `model_quant.onnx` from `model_dir` instead
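The `quantize` flag above only changes which ONNX file is loaded from `model_dir`. A sketch of the selection implied by the bullet — the helper is our own illustration, not the library's internals:

```python
from pathlib import Path

def onnx_model_path(model_dir, quantize=False):
    """Pick model_quant.onnx when quantize is True, otherwise model.onnx."""
    name = "model_quant.onnx" if quantize else "model.onnx"
    return Path(model_dir) / name

# quantize=False -> <model_dir>/model.onnx
# quantize=True  -> <model_dir>/model_quant.onnx
```

The quantized model trades a little accuracy for a smaller file and faster CPU inference, which is why both variants ship in the same directory.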
@@ -136,8 +149,8 @@
Output: `List[str]`: recognition result
-
#### CT-Transformer-online
+
```python
from funasr_onnx import CT_Transformer_VadRealtime
@@ -155,6 +168,7 @@
print(rec_result_all)
```
+
- `model_dir`: the model name on ModelScope, or a local path downloaded from ModelScope. If a local path is set, it should contain `model.onnx`, `config.yaml`, and `am.mvn`
- `device_id`: `-1` (Default), infer on CPU. If you want to infer on GPU, set it to the GPU id (please make sure that you have installed onnxruntime-gpu)
- `quantize`: `False` (Default), load `model.onnx` from `model_dir`. If set to `True`, load `model_quant.onnx` from `model_dir` instead
@@ -166,8 +180,9 @@
## Performance benchmark
-Please ref to [benchmark](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/python/benchmark_onnx.md)
+Please refer to the [benchmark](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/docs/benchmark_onnx.md)
## Acknowledge
+
1. This project is maintained by [FunASR community](https://github.com/alibaba-damo-academy/FunASR).
2. We partially refer to [SWHL](https://github.com/RapidAI/RapidASR) for onnxruntime (only for the paraformer model).
--
Gitblit v1.9.1