From 5b2fcc19af53e7a916948a0b4ddcf3624a428d52 Mon Sep 17 00:00:00 2001
From: 游雁 <zhifu.gzf@alibaba-inc.com>
Date: Tue, 08 Aug 2023 17:24:01 +0800
Subject: [PATCH] docs ssl
---
funasr/runtime/docs/SDK_advanced_guide_online.md | 48 ++++++++----------------------------------------
 1 file changed, 8 insertions(+), 40 deletions(-)
diff --git a/funasr/runtime/docs/SDK_advanced_guide_online.md b/funasr/runtime/docs/SDK_advanced_guide_online.md
index 4d635ed..f5137d7 100644
--- a/funasr/runtime/docs/SDK_advanced_guide_online.md
+++ b/funasr/runtime/docs/SDK_advanced_guide_online.md
@@ -181,12 +181,13 @@
Introduction to command parameters:
```text
---host: the IP address of the server. It can be set to 127.0.0.1 for local testing.
+--server-ip: the IP address of the server. It can be set to 127.0.0.1 for local testing.
--port: the port number of the server listener.
---audio_in: the audio input. Input can be a path to a wav file or a wav.scp file (a Kaldi-formatted wav list in which each line includes a wav_id followed by a tab and a wav_path).
---output_dir: the path to the recognition result output.
---ssl: whether to use SSL encryption. The default is to use SSL.
---mode: offline mode.
+--wav-path: the audio input. Input can be a path to a wav file or a wav.scp file (a Kaldi-formatted wav list in which each line includes a wav_id followed by a tab and a wav_path).
+--is-ssl: whether to use SSL encryption. The default is to use SSL.
+--mode: the recognition mode; set to 2pass for this online service.
+--thread-num: the number of client threads. The default is 1.
+
```
### Custom client
@@ -195,8 +196,9 @@
```text
# First communication
-{"mode": "offline", "wav_name": wav_name, "is_speaking": True}
+{"mode": "offline", "wav_name": "wav_name", "is_speaking": True, "wav_format":"pcm", "chunk_size":[5,10,5]}
# Send wav data
+
Bytes data
# Send end flag
{"is_speaking": False}
@@ -213,37 +215,3 @@
### Python client
https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/python/websocket
-
-### C++ server
-
-#### VAD
-```c++
-// The use of the VAD model consists of two steps: FsmnVadInit and FsmnVadInfer:
-FUNASR_HANDLE vad_hanlde=FsmnVadInit(model_path, thread_num);
-// Where: model_path contains "model-dir" and "quantize", thread_num is the ONNX thread count;
-FUNASR_RESULT result=FsmnVadInfer(vad_hanlde, wav_file.c_str(), NULL, 16000);
-// Where: vad_hanlde is the return value of FunOfflineInit, wav_file is the path to the audio file, and sampling_rate is the sampling rate (default 16k).
-```
-
-See the usage example for details [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-offline-vad.cpp)
-
-#### ASR
-```text
-// The use of the ASR model consists of two steps: FunOfflineInit and FunOfflineInfer:
-FUNASR_HANDLE asr_hanlde=FunOfflineInit(model_path, thread_num);
-// Where: model_path contains "model-dir" and "quantize", thread_num is the ONNX thread count;
-FUNASR_RESULT result=FunOfflineInfer(asr_hanlde, wav_file.c_str(), RASR_NONE, NULL, 16000);
-// Where: asr_hanlde is the return value of FunOfflineInit, wav_file is the path to the audio file, and sampling_rate is the sampling rate (default 16k).
-```
-
-See the usage example for details, [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-offline.cpp)
-
-#### PUNC
-```text
-// The use of the PUNC model consists of two steps: CTTransformerInit and CTTransformerInfer:
-FUNASR_HANDLE punc_hanlde=CTTransformerInit(model_path, thread_num);
-// Where: model_path contains "model-dir" and "quantize", thread_num is the ONNX thread count;
-FUNASR_RESULT result=CTTransformerInfer(punc_hanlde, txt_str.c_str(), RASR_NONE, NULL);
-// Where: punc_hanlde is the return value of CTTransformerInit, txt_str is the text
-```
-See the usage example for details, [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-offline-punc.cpp)
--
Gitblit v1.9.1
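As a companion to the protocol change in this patch, the first-communication message and the end flag can be assembled as below. This is a minimal sketch, not part of the patched doc: the function names are hypothetical helpers, the field values (`"pcm"`, `[5, 10, 5]`, `"offline"`) mirror the example message in the hunk above, and the actual websocket send/receive is omitted.

```python
import json


def build_handshake(wav_name, mode="offline", wav_format="pcm",
                    chunk_size=(5, 10, 5)):
    """Build the first websocket message, matching the patched example:
    {"mode": ..., "wav_name": ..., "is_speaking": True,
     "wav_format": ..., "chunk_size": [...]}"""
    return json.dumps({
        "mode": mode,
        "wav_name": wav_name,
        "is_speaking": True,
        "wav_format": wav_format,
        "chunk_size": list(chunk_size),
    })


def build_end_flag():
    # Sent after all raw audio bytes to signal the end of the stream.
    return json.dumps({"is_speaking": False})
```

After sending the handshake, the client streams the raw audio as binary frames and finishes with `build_end_flag()`, as described in the "Send wav data" / "Send end flag" steps above.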