From 4f078d1cbd4dfd1ffce31a563cc792098174f920 Mon Sep 17 00:00:00 2001
From: 游雁 <zhifu.gzf@alibaba-inc.com>
Date: Thu, 25 Jan 2024 15:05:12 +0800
Subject: [PATCH] Merge branch 'main' of github.com:alibaba-damo-academy/FunASR add
---
README.md | 73 ++++++++++++++++++++----------------
1 file changed, 40 insertions(+), 33 deletions(-)
diff --git a/README.md b/README.md
index 05f6364..d50f27d 100644
--- a/README.md
+++ b/README.md
@@ -28,6 +28,7 @@
<a name="whats-new"></a>
## What's new:
+- 2024/01/25: Offline File Transcription Service 4.2 and Offline File Transcription Service of English 1.3 released: optimized the VAD (Voice Activity Detection) data processing method, significantly reducing peak memory usage, and fixed memory leaks; Real-time Transcription Service 1.7 released, with client-side optimizations ([docs](runtime/readme.md))
- 2024/01/09: The FunASR SDK for Windows version 2.0 has been released, featuring support for the offline file transcription service (CPU) of Mandarin 4.1, the offline file transcription service (CPU) of English 1.2, and the real-time transcription service (CPU) of Mandarin 1.6. For more details, please refer to the official documentation or release notes ([FunASR-Runtime-Windows](https://www.modelscope.cn/models/damo/funasr-runtime-win-cpu-x64/summary))
- 2024/01/03: File Transcription Service 4.0 released: added support for 8k models, optimized timestamp mismatch issues, added sentence-level timestamps, improved the effectiveness of English word FST hotwords, supported automated configuration of thread parameters, and fixed known crash issues as well as memory leak problems ([docs](runtime/readme.md#file-transcription-service-mandarin-cpu)).
- 2024/01/03: Real-time Transcription Service 1.6 released: the 2pass-offline mode supports Ngram language model decoding and WFST hotwords, while also addressing known crash issues and memory leak problems ([docs](runtime/readme.md#the-real-time-transcription-service-mandarin-cpu))
@@ -55,17 +56,16 @@
(Note: 🤗 represents the Huggingface model zoo link, ⭐ represents the ModelScope model zoo link)
-| Model Name | Task Details | Training Data | Parameters |
-|:------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------:|:--------------------------------:|:----------:|
-| paraformer-zh <br> ([⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) [🤗]() ) | speech recognition, with timestamps, non-streaming | 60000 hours, Mandarin | 220M |
-| paraformer-zh-spk <br> ( [⭐](https://modelscope.cn/models/damo/speech_paraformer-large-vad-punc-spk_asr_nat-zh-cn/summary) [🤗]() ) | speech recognition with speaker diarization, with timestamps, non-streaming | 60000 hours, Mandarin | 220M |
-| <nobr>paraformer-zh-online <br> ( [⭐](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online/summary) [🤗]() )</nobr> | speech recognition, streaming | 60000 hours, Mandarin | 220M |
-| paraformer-en <br> ( [⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-en-16k-common-vocab10020/summary) [🤗]() ) | speech recognition, with timestamps, non-streaming | 50000 hours, English | 220M |
-| paraformer-en-spk <br> ([⭐]()[🤗]() ) | speech recognition with speaker diarization, non-streaming | Undo | Undo |
-| conformer-en <br> ( [⭐](https://modelscope.cn/models/damo/speech_conformer_asr-en-16k-vocab4199-pytorch/summary) [🤗]() ) | speech recognition, non-streaming | 50000 hours, English | 220M |
-| ct-punc <br> ( [⭐](https://modelscope.cn/models/damo/punc_ct-transformer_cn-en-common-vocab471067-large/summary) [🤗]() ) | punctuation restoration | 100M, Mandarin and English | 1.1G |
-| fsmn-vad <br> ( [⭐](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/summary) [🤗]() ) | voice activity detection | 5000 hours, Mandarin and English | 0.4M |
-| fa-zh <br> ( [⭐](https://modelscope.cn/models/damo/speech_timestamp_prediction-v1-16k-offline/summary) [🤗]() ) | timestamp prediction | 5000 hours, Mandarin | 38M |
+| Model Name | Task Details | Training Data | Parameters |
+|:------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------:|:--------------------------------:|:----------:|
+| paraformer-zh <br> ([⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) [🤗]() ) | speech recognition, with timestamps, non-streaming | 60000 hours, Mandarin | 220M |
+| <nobr>paraformer-zh-online <br> ( [⭐](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online/summary) [🤗]() )</nobr> | speech recognition, streaming | 60000 hours, Mandarin | 220M |
+| paraformer-en <br> ( [⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-en-16k-common-vocab10020/summary) [🤗]() ) | speech recognition, with timestamps, non-streaming | 50000 hours, English | 220M |
+| conformer-en <br> ( [⭐](https://modelscope.cn/models/damo/speech_conformer_asr-en-16k-vocab4199-pytorch/summary) [🤗]() ) | speech recognition, non-streaming | 50000 hours, English | 220M |
+| ct-punc <br> ( [⭐](https://modelscope.cn/models/damo/punc_ct-transformer_cn-en-common-vocab471067-large/summary) [🤗]() ) | punctuation restoration | 100M, Mandarin and English | 1.1G |
+| fsmn-vad <br> ( [⭐](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/summary) [🤗]() ) | voice activity detection | 5000 hours, Mandarin and English | 0.4M |
+| fa-zh <br> ( [⭐](https://modelscope.cn/models/damo/speech_timestamp_prediction-v1-16k-offline/summary) [🤗]() ) | timestamp prediction | 5000 hours, Mandarin | 38M |
+| cam++ <br> ( [⭐](https://modelscope.cn/models/iic/speech_campplus_sv_zh-cn_16k-common/summary) [🤗]() ) | speaker verification/diarization | 5000 hours | 7.2M |
@@ -92,13 +92,14 @@
from funasr import AutoModel
# paraformer-zh is a multi-functional asr model
# use vad, punc, spk or not as you need
-model = AutoModel(model="paraformer-zh", model_revision="v2.0.2", \
- vad_model="fsmn-vad", vad_model_revision="v2.0.2", \
- punc_model="ct-punc-c", punc_model_revision="v2.0.2", \
- spk_model="cam++", spk_model_revision="v2.0.2")
-res = model(input=f"{model.model_path}/example/asr_example.wav",
- batch_size=16,
- hotword='魔搭')
+model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
+ vad_model="fsmn-vad", vad_model_revision="v2.0.4",
+ punc_model="ct-punc-c", punc_model_revision="v2.0.4",
+ # spk_model="cam++", spk_model_revision="v2.0.2",
+ )
+res = model.generate(input=f"{model.model_path}/example/asr_example.wav",
+ batch_size_s=300,
+ hotword='魔搭')
print(res)
```
Note: `model_hub` specifies the model repository; `ms` selects downloading from ModelScope, while `hf` selects downloading from Huggingface.
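+
+As a minimal sketch (assuming `model_hub` is accepted as an `AutoModel` keyword, which the note above implies but does not show; the default is ModelScope):
+```python
+from funasr import AutoModel
+
+# hypothetical: download the model from Huggingface instead of the ModelScope default
+model = AutoModel(model="paraformer-zh", model_revision="v2.0.4", model_hub="hf")
+```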
@@ -111,7 +112,7 @@
encoder_chunk_look_back = 4 #number of chunks to lookback for encoder self-attention
decoder_chunk_look_back = 1 #number of encoder chunks to lookback for decoder cross-attention
-model = AutoModel(model="paraformer-zh-streaming", model_revision="v2.0.2")
+model = AutoModel(model="paraformer-zh-streaming", model_revision="v2.0.4")
import soundfile
import os
@@ -125,7 +126,7 @@
for i in range(total_chunk_num):
speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
is_final = i == total_chunk_num - 1
- res = model(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size, encoder_chunk_look_back=encoder_chunk_look_back, decoder_chunk_look_back=decoder_chunk_look_back)
+ res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size, encoder_chunk_look_back=encoder_chunk_look_back, decoder_chunk_look_back=decoder_chunk_look_back)
print(res)
```
Note: `chunk_size` is the configuration for streaming latency. `[0,10,5]` indicates that the real-time display granularity is `10*60=600ms`, and the lookahead information is `5*60=300ms`. Each inference input is `600ms` (`16000*0.6=9600` sample points), and the output is the corresponding text. For the last speech segment input, `is_final=True` needs to be set to force the output of the last word.
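+
+A quick sanity check of the arithmetic above (a standalone sketch, independent of the FunASR API):
+```python
+sample_rate = 16000      # Hz
+chunk_size = [0, 10, 5]  # per the note: 60 ms frames; [1] = chunk length, [2] = lookahead
+
+granularity_ms = chunk_size[1] * 60  # 10 * 60 = 600 ms per inference step
+lookahead_ms = chunk_size[2] * 60    # 5 * 60 = 300 ms of future context
+samples_per_step = sample_rate * granularity_ms // 1000  # 16000 * 0.6 = 9600
+print(granularity_ms, lookahead_ms, samples_per_step)    # 600 300 9600
+```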
@@ -134,10 +135,9 @@
```python
from funasr import AutoModel
-model = AutoModel(model="fsmn-vad", model_revision="v2.0.2")
-
+model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
wav_file = f"{model.model_path}/example/asr_example.wav"
-res = model(input=wav_file)
+res = model.generate(input=wav_file)
print(res)
```
### Voice Activity Detection (Streaming)
@@ -145,7 +145,7 @@
from funasr import AutoModel
chunk_size = 200 # ms
-model = AutoModel(model="fsmn-vad", model_revision="v2.0.2")
+model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
import soundfile
@@ -158,7 +158,7 @@
for i in range(total_chunk_num):
speech_chunk = speech[i*chunk_stride:(i+1)*chunk_stride]
is_final = i == total_chunk_num - 1
- res = model(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size)
+ res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size)
if len(res[0]["value"]):
print(res)
```
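+
+The layout of the streaming `value` field is not documented in this snippet; a rough interpretation (an assumption based on the offline output format: `[start_ms, end_ms]` pairs, with `-1` marking a boundary not yet observed) might look like:
+```python
+# hypothetical helper, assuming value = [[start_ms, end_ms], ...] with -1 for unseen boundaries
+def describe_segments(value):
+    for beg, end in value:
+        if beg == -1:
+            print(f"segment end detected at {end} ms")
+        elif end == -1:
+            print(f"segment start detected at {beg} ms")
+        else:
+            print(f"complete segment: {beg} ms .. {end} ms")
+```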
@@ -166,22 +166,23 @@
```python
from funasr import AutoModel
-model = AutoModel(model="ct-punc", model_revision="v2.0.2")
-
-res = model(input="那今天的会就到这里吧 happy new year 明年见")
+model = AutoModel(model="ct-punc", model_revision="v2.0.4")
+res = model.generate(input="那今天的会就到这里吧 happy new year 明年见")
print(res)
```
### Timestamp Prediction
```python
from funasr import AutoModel
-model = AutoModel(model="fa-zh", model_revision="v2.0.2")
-
+model = AutoModel(model="fa-zh", model_revision="v2.0.4")
wav_file = f"{model.model_path}/example/asr_example.wav"
-text_file = f"{model.model_path}/example/asr_example.wav"
-res = model(input=(wav_file, text_file), data_type=("sound", "text"))
+text_file = f"{model.model_path}/example/text.txt"
+res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
print(res)
```
+
+More examples can be found in the [docs](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining)
+
[//]: # (FunASR supports inference and fine-tuning of models trained on industrial datasets of tens of thousands of hours. For more details, please refer to ([modelscope_egs](https://alibaba-damo-academy.github.io/FunASR/en/modelscope_pipeline/quick_start.html)). It also supports training and fine-tuning of models on academic standard datasets. For more details, please refer to([egs](https://alibaba-damo-academy.github.io/FunASR/en/academic_recipe/asr_recipe.html)). The models include speech recognition (ASR), speech activity detection (VAD), punctuation recovery, language model, speaker verification, speaker separation, and multi-party conversation speech recognition. For a detailed list of models, please refer to the [Model Zoo](https://github.com/alibaba-damo-academy/FunASR/blob/main/docs/model_zoo/modelscope_models.md):)
## Deployment Service
@@ -233,10 +234,16 @@
}
@inproceedings{gao22b_interspeech,
author={Zhifu Gao and ShiLiang Zhang and Ian McLoughlin and Zhijie Yan},
- title={{Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition}},
+ title={Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={2063--2067},
doi={10.21437/Interspeech.2022-9996}
}
+@inproceedings{shi2023seaco,
+ author={Xian Shi and Yexin Yang and Zerui Li and Yanni Chen and Zhifu Gao and Shiliang Zhang},
+ title={SeACo-Paraformer: A Non-Autoregressive ASR System with Flexible and Effective Hotword Customization Ability},
+ year={2023},
+  booktitle={ICASSP 2024}
+}
```
--
Gitblit v1.9.1