From 9a73f202d8108973b6aface320d3fcb38fc11bbd Mon Sep 17 00:00:00 2001
From: Yabin Li <wucong.lyb@alibaba-inc.com>
Date: Tue, 08 Aug 2023 11:01:51 +0800
Subject: [PATCH] Update SDK_advanced_guide_online.md

---
 funasr/runtime/docs/SDK_advanced_guide_online.md |   44 ++++----------------------------------------
 1 file changed, 4 insertions(+), 40 deletions(-)

diff --git a/funasr/runtime/docs/SDK_advanced_guide_online.md b/funasr/runtime/docs/SDK_advanced_guide_online.md
index 4d635ed..c9d6f7e 100644
--- a/funasr/runtime/docs/SDK_advanced_guide_online.md
+++ b/funasr/runtime/docs/SDK_advanced_guide_online.md
@@ -181,11 +181,10 @@
 Introduction to command parameters:
 
 ```text
---host: the IP address of the server. It can be set to 127.0.0.1 for local testing.
+--server-ip: the IP address of the server. It can be set to 127.0.0.1 for local testing.
 --port: the port number of the server listener.
---audio_in: the audio input. Input can be a path to a wav file or a wav.scp file (a Kaldi-formatted wav list in which each line includes a wav_id followed by a tab and a wav_path).
---output_dir: the path to the recognition result output.
---ssl: whether to use SSL encryption. The default is to use SSL.
+--wav-path: the audio input. Input can be a path to a wav file or a wav.scp file (a Kaldi-formatted wav list in which each line includes a wav_id followed by a tab and a wav_path).
+--is-ssl: whether to use SSL encryption. The default is to use SSL.
 --mode: offline mode.
 ```
 
@@ -195,8 +194,8 @@
 
 ```text
 # First communication
-{"mode": "offline", "wav_name": wav_name, "is_speaking": True}
+{"mode": "offline", "wav_name": "wav_name", "is_speaking": True, "wav_format": "pcm", "chunk_size": [5,10,5]}
 # Send wav data
 Bytes data
 # Send end flag
 {"is_speaking": False}
@@ -213,37 +211,3 @@
 ### Python client
 
 https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/python/websocket
-
-### C++ server
-
-#### VAD
-```c++
-// The use of the VAD model consists of two steps: FsmnVadInit and FsmnVadInfer:
-FUNASR_HANDLE vad_hanlde=FsmnVadInit(model_path, thread_num);
-// Where: model_path contains "model-dir" and "quantize", thread_num is the ONNX thread count;
-FUNASR_RESULT result=FsmnVadInfer(vad_hanlde, wav_file.c_str(), NULL, 16000);
-// Where: vad_hanlde is the return value of FunOfflineInit, wav_file is the path to the audio file, and sampling_rate is the sampling rate (default 16k).
-```
-
-See the usage example for details [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-offline-vad.cpp)
-
-#### ASR
-```text
-// The use of the ASR model consists of two steps: FunOfflineInit and FunOfflineInfer:
-FUNASR_HANDLE asr_hanlde=FunOfflineInit(model_path, thread_num);
-// Where: model_path contains "model-dir" and "quantize", thread_num is the ONNX thread count;
-FUNASR_RESULT result=FunOfflineInfer(asr_hanlde, wav_file.c_str(), RASR_NONE, NULL, 16000);
-// Where: asr_hanlde is the return value of FunOfflineInit, wav_file is the path to the audio file, and sampling_rate is the sampling rate (default 16k).
-```
-
-See the usage example for details, [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-offline.cpp)
-
-#### PUNC
-```text
-// The use of the PUNC model consists of two steps: CTTransformerInit and CTTransformerInfer:
-FUNASR_HANDLE punc_hanlde=CTTransformerInit(model_path, thread_num);
-// Where: model_path contains "model-dir" and "quantize", thread_num is the ONNX thread count;
-FUNASR_RESULT result=CTTransformerInfer(punc_hanlde, txt_str.c_str(), RASR_NONE, NULL);
-// Where: punc_hanlde is the return value of CTTransformerInit, txt_str is the text
-```
-See the usage example for details, [docs](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/onnxruntime/bin/funasr-onnx-offline-punc.cpp)

--
Gitblit v1.9.1