From 331d57253ae25dd42c8e14930dee30cd8d2affa6 Mon Sep 17 00:00:00 2001
From: zhifu gao <zhifu.gzf@alibaba-inc.com>
Date: Mon, 24 Apr 2023 11:41:18 +0800
Subject: [PATCH] Merge pull request #408 from alibaba-damo-academy/hnluo-patch-1

---
 funasr/runtime/python/websocket/README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/funasr/runtime/python/websocket/README.md b/funasr/runtime/python/websocket/README.md
index 2c0dec1..73f8aeb 100644
--- a/funasr/runtime/python/websocket/README.md
+++ b/funasr/runtime/python/websocket/README.md
@@ -2,16 +2,16 @@
 We can send streaming audio data to server in real-time with grpc client every 300 ms e.g., and get transcribed text when stop speaking. The audio data is in streaming, the asr inference process is in offline.
 
-# Steps
 ## For the Server
 Install the modelscope and funasr
 ```shell
-pip install "modelscope[audio_asr]" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
+pip install -U modelscope funasr
+# For the users in China, you could install with the command:
+# pip install -U modelscope funasr -i https://mirror.sjtu.edu.cn/pypi/web/simple
 git clone https://github.com/alibaba/FunASR.git && cd FunASR
-pip install --editable ./
 ```
 
 Install the requirements for server
--
Gitblit v1.9.1