From 528f92f7a2a26cade1c57ccf26b0ba6524e7cae5 Mon Sep 17 00:00:00 2001
From: TnR2 <115166373+TnR2@users.noreply.github.com>
Date: Wed, 01 Oct 2025 14:45:17 +0800
Subject: [PATCH] fix: handle empty strings after event removal in transcription processing (def rich_transcription_postprocess(s)) (#2681)

---
 docs/tutorial/README.md |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tutorial/README.md b/docs/tutorial/README.md
index 933b611..3a3886e 100644
--- a/docs/tutorial/README.md
+++ b/docs/tutorial/README.md
@@ -38,7 +38,7 @@
 model = AutoModel(model=[str], device=[str], ncpu=[int], output_dir=[str], batch_size=[int], hub=[str], **kwargs)
 ```
 - `model`(str): model name in the [Model Repository](https://github.com/alibaba-damo-academy/FunASR/tree/main/model_zoo), or a model path on local disk.
-- `device`(str): `cuda:0` (default gpu0) for using GPU for inference, specify `cpu` for using CPU.
+- `device`(str): `cuda:0` (default, GPU 0) to run inference on an NVIDIA GPU; `cpu` to run inference on the CPU; `mps` to run inference via MPS on Macs with Apple M-series chips; `xpu` to run inference on an Intel GPU.
 - `ncpu`(int): `4` (default), sets the number of threads for CPU internal operations.
 - `output_dir`(str): `None` (default), set this to specify the output path for the results.
 - `batch_size`(int): `1` (default), the number of samples per batch during decoding.

--
Gitblit v1.9.1
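
The accepted `device` strings listed in the patched docs can be sketched as a small validation helper. This is a hypothetical illustration only, not FunASR's internal device resolution; the function name `normalize_device` and its behavior are assumptions made for this example:

```python
# Hypothetical helper mirroring the device strings the docs describe:
# "cuda:<n>" (NVIDIA GPU), "cpu", "mps" (Apple M-series), "xpu" (Intel GPU).
def normalize_device(device: str) -> str:
    allowed = ("cuda", "cpu", "mps", "xpu")
    # Strip an optional index suffix such as ":0" before checking the prefix.
    base = device.split(":", 1)[0]
    if base not in allowed:
        raise ValueError(f"unsupported device {device!r}; expected one of {allowed}")
    return device

# Example: a value like the documented default would pass through unchanged,
# e.g. normalize_device("cuda:0") returns "cuda:0".
```

In a real call this value would simply be forwarded, e.g. `AutoModel(model=..., device="mps")`, with FunASR (and the underlying PyTorch build) deciding whether that backend is actually available at runtime.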