From 48ee4dd536242f25bb157c7962941c3661b4286b Mon Sep 17 00:00:00 2001
From: yhliang <429259365@qq.com>
Date: Wed, 10 May 2023 17:12:30 +0800
Subject: [PATCH] fix format error

---
 docs/m2met2/_build/html/_sources/Baseline.md.txt |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/m2met2/_build/html/_sources/Baseline.md.txt b/docs/m2met2/_build/html/_sources/Baseline.md.txt
index a782086..cdaff8a 100644
--- a/docs/m2met2/_build/html/_sources/Baseline.md.txt
+++ b/docs/m2met2/_build/html/_sources/Baseline.md.txt
@@ -6,7 +6,7 @@
 
 ## Quick start
 To run the baseline, first you need to install FunASR and ModelScope. ([installation](https://alibaba-damo-academy.github.io/FunASR/en/installation.html))  
-There are two startup scripts, `run.sh` for training and evaluating on the old eval and test sets, and `run_m2met_2023_infer.sh` for inference on the new test set of the Multi-Channel Multi-Party Meeting Transcription 2.0 ([M2MET2.0](https://alibaba-damo-academy.github.io/FunASR/m2met2/index.html)) Challenge.  
+There are two startup scripts, `run.sh` for training and evaluating on the old eval and test sets, and `run_m2met_2023_infer.sh` for inference on the new test set of the Multi-Channel Multi-Party Meeting Transcription 2.0 ([M2MeT2.0](https://alibaba-damo-academy.github.io/FunASR/m2met2/index.html)) Challenge.  
 Before running `run.sh`, you must manually download and unpack the [AliMeeting](http://www.openslr.org/119/) corpus and place it in the `./dataset` directory:
 ```shell
 dataset
@@ -16,6 +16,7 @@
 |—— Test_Ali_near
 |—— Train_Ali_far
 |—— Train_Ali_near
+```
 Before running `run_m2met_2023_infer.sh`, you need to place the new test set `Test_2023_Ali_far` (to be released after the challenge starts) in the `./dataset` directory, which contains only raw audios. Then put the given `wav.scp`, `wav_raw.scp`, `segments`, `utt2spk` and `spk2utt` in the `./data/Test_2023_Ali_far` directory.  
 ```shell
 data/Test_2023_Ali_far

--
Gitblit v1.9.1
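The doc patched above tells users to unpack AliMeeting under `./dataset` before launching `run.sh`. A minimal pre-flight sketch of that step, assuming the directory names shown in the patch's tree (the full corpus contains more subsets than this hunk lists; the check itself is illustrative and not part of the baseline scripts):

```shell
# Illustrative pre-flight check for the ./dataset layout run.sh expects.
# Subset names are taken from the tree in the patched doc; only the ones
# visible in this hunk are checked here.
subsets="Test_Ali_near Train_Ali_far Train_Ali_near"

# For demonstration, build the layout in a scratch directory first.
root=$(mktemp -d)
for d in $subsets; do
    mkdir -p "$root/dataset/$d"
done

# The actual check: every expected subset directory must exist.
status=ok
for d in $subsets; do
    [ -d "$root/dataset/$d" ] || status=missing
done
echo "dataset layout: $status"

rm -rf "$root"
```

Running the same loop against the real `./dataset` (instead of the scratch `$root`) before `run.sh` gives an early failure with a clear message rather than a mid-training crash on a missing subset.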