From 3c3754dcc7568e76fa7d4b2c4e14849f68cc6ee7 Mon Sep 17 00:00:00 2001
From: 嘉渊 <wangjiaming.wjm@alibaba-inc.com>
Date: Sun, 28 May 2023 23:46:01 +0800
Subject: [PATCH] update repo

---
 egs/alimeeting/sa-asr/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/egs/alimeeting/sa-asr/README.md b/egs/alimeeting/sa-asr/README.md
index 951670b..2ef6bbe 100644
--- a/egs/alimeeting/sa-asr/README.md
+++ b/egs/alimeeting/sa-asr/README.md
@@ -1,6 +1,6 @@
 # Get Started
 Speaker Attributed Automatic Speech Recognition (SA-ASR) is a task proposed to solve "who spoke what". Specifically, the goal of SA-ASR is not only to obtain multi-speaker transcriptions, but also to identify the corresponding speaker for each utterance. The method used in this example is referenced in the paper: [End-to-End Speaker-Attributed ASR with Transformer](https://www.isca-speech.org/archive/pdfs/interspeech_2021/kanda21b_interspeech.pdf).
-To run this receipe, first you need to install FunASR and ModelScope. ([installation](https://alibaba-damo-academy.github.io/FunASR/en/installation.html))
+To run this recipe, first you need to install FunASR and ModelScope. ([installation](https://github.com/alibaba-damo-academy/FunASR#installation))
 There are two startup scripts, `run.sh` for training and evaluating on the old eval and test sets, and `run_m2met_2023_infer.sh` for inference on the new test set of the Multi-Channel Multi-Party Meeting Transcription 2.0 ([M2MeT2.0](https://alibaba-damo-academy.github.io/FunASR/m2met2/index.html)) Challenge.
 Before running `run.sh`, you must manually download and unpack the [AliMeeting](http://www.openslr.org/119/) corpus and place it in the `./dataset` directory:
 ```shell
@@ -12,7 +12,7 @@
 |—— Train_Ali_far
 |—— Train_Ali_near
 ```
-There are 18 stages in `run.sh`:
+There are 16 stages in `run.sh`:
 ```shell
 stage 1 - 5: Data preparation and processing.
 stage 6: Generate speaker profiles (Stage 6 takes a lot of time).
--
Gitblit v1.9.1