From 4413d1eb47fa400277c8a9625aa0bd5c424a2fab Mon Sep 17 00:00:00 2001
From: zhaomingwork <61895407+zhaomingwork@users.noreply.github.com>
Date: Thu, 06 Jul 2023 10:13:32 +0800
Subject: [PATCH] Python ws client slow problem for multiple files in offline mode (#716)

---
 docs/reference/papers.md |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/docs/reference/papers.md b/docs/reference/papers.md
index 33bf72f..22da0db 100644
--- a/docs/reference/papers.md
+++ b/docs/reference/papers.md
@@ -3,7 +3,9 @@
 FunASR have implemented the following paper code
 
 ### Speech Recognition
-- [Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition](https://arxiv.org/abs/2206.08317), INTERSPEECH 2022.
+- [FunASR: A Fundamental End-to-End Speech Recognition Toolkit](https://arxiv.org/abs/2305.11013), INTERSPEECH 2023
+- [BAT: Boundary aware transducer for memory-efficient and low-latency ASR](https://arxiv.org/abs/2305.11571), INTERSPEECH 2023
+- [Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition](https://arxiv.org/abs/2206.08317), INTERSPEECH 2022
 - [Universal ASR: Unifying Streaming and Non-Streaming ASR Using a Single Encoder-Decoder Model](https://arxiv.org/abs/2010.14099), arXiv preprint arXiv:2010.14099, 2020.
 - [San-m: Memory equipped self-attention for end-to-end speech recognition](https://arxiv.org/pdf/2006.01713), INTERSPEECH 2020
 - [Streaming Chunk-Aware Multihead Attention for Online End-to-End Speech Recognition](https://arxiv.org/abs/2006.01712), INTERSPEECH 2020

--
Gitblit v1.9.1