From 32d2b3ec153e53176da710ebcc0aba5669effd8a Mon Sep 17 00:00:00 2001
From: yhliang <429259365@qq.com>
Date: Thu, 27 Apr 2023 17:45:00 +0800
Subject: [PATCH] update m2met2 docs

---
 funasr/runtime/python/websocket/README.md |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/funasr/runtime/python/websocket/README.md b/funasr/runtime/python/websocket/README.md
index 2c0dec1..73f8aeb 100644
--- a/funasr/runtime/python/websocket/README.md
+++ b/funasr/runtime/python/websocket/README.md
@@ -2,16 +2,16 @@
 We can send streaming audio data to the server in real time with the grpc client (e.g., every 300 ms) and receive the transcribed text when the speaker stops.
 The audio data is streamed, while the ASR inference itself runs in offline mode.
 
-# Steps
 
 ## For the Server
 
 Install modelscope and funasr
 
 ```shell
-pip install "modelscope[audio_asr]" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
+pip install -U modelscope funasr
+# For users in China, you can install from the mirror:
+# pip install -U modelscope funasr -i https://mirror.sjtu.edu.cn/pypi/web/simple
 git clone https://github.com/alibaba/FunASR.git && cd FunASR
-pip install --editable ./
 ```
 
 Install the requirements for the server

--
Gitblit v1.9.1
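
The patched README describes sending audio to the server in roughly 300 ms chunks. As a minimal sketch of that chunking step (assumptions: 16 kHz, 16-bit mono PCM; the `chunk_audio` helper and the constants are hypothetical illustrations, not part of FunASR, and the actual websocket send is omitted):

```python
# Sketch of splitting raw PCM audio into ~300 ms chunks for streaming,
# as described in the README excerpt above. All names here are assumed
# for illustration; FunASR's real client may differ.

SAMPLE_RATE = 16000      # samples per second (assumed)
BYTES_PER_SAMPLE = 2     # 16-bit PCM (assumed)
CHUNK_MS = 300           # send interval mentioned in the README


def chunk_audio(pcm: bytes, chunk_ms: int = CHUNK_MS) -> list:
    """Split a raw PCM byte buffer into fixed-duration chunks."""
    chunk_bytes = SAMPLE_RATE * BYTES_PER_SAMPLE * chunk_ms // 1000
    return [pcm[i:i + chunk_bytes] for i in range(0, len(pcm), chunk_bytes)]


if __name__ == "__main__":
    one_second = bytes(SAMPLE_RATE * BYTES_PER_SAMPLE)  # 1 s of silence
    chunks = chunk_audio(one_second)
    # 1 s of audio yields 300 + 300 + 300 + 100 ms chunks
    print(len(chunks))  # → 4
```

Each chunk would then be sent over the websocket connection on a 300 ms timer; the transcription result arrives once the client signals end of speech.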