From 6d3a3da8a8c7d1be9740a9b2d6fac767f8dfff17 Mon Sep 17 00:00:00 2001
From: 游雁 <zhifu.gzf@alibaba-inc.com>
Date: Thu, 30 May 2024 19:16:52 +0800
Subject: [PATCH] docs

---
 README.md |   34 ++++++++++++++++++++++++++++++----
 1 file changed, 30 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 81fb1b3..c9eae96 100644
--- a/README.md
+++ b/README.md
@@ -2,8 +2,9 @@
 
 ([简体中文](./README_zh.md)|English)
 
-# FunASR: A Fundamental End-to-End Speech Recognition Toolkit
+[//]: # (# FunASR: A Fundamental End-to-End Speech Recognition Toolkit)
 
+[![SVG Banners](https://svg-banners.vercel.app/api?type=origin&text1=FunASR🤠&text2=💖%20A%20Fundamental%20End-to-End%20Speech%20Recognition%20Toolkit&width=800&height=210)](https://github.com/Akshay090/svg-banners)
 
 [![PyPI](https://img.shields.io/pypi/v/funasr)](https://pypi.org/project/funasr/)
 
@@ -59,18 +60,26 @@
 <a name="Installation"></a>
 ## Installation
 
+- Requirements
+```text
+python>=3.8
+torch>=1.13
+torchaudio
+```
+
+- Install from PyPI
 ```shell
 pip3 install -U funasr
 ```
-Or install from source code
+- Or install from source code
 ```shell
 git clone https://github.com/alibaba/FunASR.git && cd FunASR
 pip3 install -e ./
 ```
-Install modelscope for the pretrained models (Optional)
+- Install modelscope or huggingface_hub for downloading pretrained models (optional)
 
 ```shell
-pip3 install -U modelscope
+pip3 install -U modelscope huggingface_hub
 ```
 
 ## Model Zoo
@@ -157,6 +166,8 @@
 ```
 Note: `chunk_size` is the configuration for streaming latency. `[0,10,5]` indicates that the real-time display granularity is `10*60=600ms`, and the lookahead information is `5*60=300ms`. Each inference input is `600ms` (the number of sample points is `16000*0.6=9600`), and the output is the corresponding text. For the final speech segment, `is_final=True` must be set to force the output of the last word.
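
The chunk-size arithmetic above can be sketched as follows. This is a plain illustration of the numbers in the note; `chunk_params`, `SAMPLE_RATE`, and `FRAME_MS` are hypothetical names for this sketch, not part of the FunASR API:

```python
# Arithmetic behind chunk_size = [0, 10, 5] at 16 kHz, 60 ms per unit.
SAMPLE_RATE = 16000  # Hz
FRAME_MS = 60        # each chunk_size unit corresponds to 60 ms of audio

def chunk_params(chunk_size):
    granularity_ms = chunk_size[1] * FRAME_MS            # 10 * 60 = 600 ms per inference
    lookahead_ms = chunk_size[2] * FRAME_MS              # 5 * 60 = 300 ms lookahead
    samples_per_chunk = SAMPLE_RATE * granularity_ms // 1000  # 16000 * 0.6 = 9600 samples
    return granularity_ms, lookahead_ms, samples_per_chunk

print(chunk_params([0, 10, 5]))  # (600, 300, 9600)
```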
 
+<details><summary>More Examples</summary>
+
 ### Voice Activity Detection (Non-Streaming)
 ```python
 from funasr import AutoModel
@@ -215,9 +226,24 @@
 res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
 print(res)
 ```
+
+
+### Speech Emotion Recognition
+```python
+from funasr import AutoModel
+
+model = AutoModel(model="emotion2vec_plus_large")
+
+wav_file = f"{model.model_path}/example/test.wav"
+
+res = model.generate(wav_file, output_dir="./outputs", granularity="utterance", extract_embedding=False)
+print(res)
+```
+
 For more usage details, see the [docs](docs/tutorial/README_zh.md);
 for more examples, see the [demos](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining).
 
+</details>
 
 ## Export ONNX
 

--
Gitblit v1.9.1