From 1dcdd5f8a618ba7ea7eb8d7c9b5f3d0acf5a3d9d Mon Sep 17 00:00:00 2001
From: zhifu gao <zhifu.gzf@alibaba-inc.com>
Date: Tue, 25 Jul 2023 14:07:27 +0800
Subject: [PATCH] Update readme.md

---
 docs/reference/papers.md |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/reference/papers.md b/docs/reference/papers.md
index 33bf72f..22da0db 100644
--- a/docs/reference/papers.md
+++ b/docs/reference/papers.md
@@ -3,7 +3,9 @@
 FunASR has implemented the code from the following papers
 
 ### Speech Recognition
-- [Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition](https://arxiv.org/abs/2206.08317), INTERSPEECH 2022.
+- [FunASR: A Fundamental End-to-End Speech Recognition Toolkit](https://arxiv.org/abs/2305.11013), INTERSPEECH 2023
+- [BAT: Boundary aware transducer for memory-efficient and low-latency ASR](https://arxiv.org/abs/2305.11571), INTERSPEECH 2023
+- [Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition](https://arxiv.org/abs/2206.08317), INTERSPEECH 2022
 - [Universal ASR: Unifying Streaming and Non-Streaming ASR Using a Single Encoder-Decoder Model](https://arxiv.org/abs/2010.14099), arXiv preprint arXiv:2010.14099, 2020.
 - [San-m: Memory equipped self-attention for end-to-end speech recognition](https://arxiv.org/pdf/2006.01713), INTERSPEECH 2020
 - [Streaming Chunk-Aware Multihead Attention for Online End-to-End Speech Recognition](https://arxiv.org/abs/2006.01712), INTERSPEECH 2020

--
Gitblit v1.9.1