From 5fec3c9e58fceda85fa2daf7deec2492372dac8a Mon Sep 17 00:00:00 2001
From: Chong Zhang <iriszhangchong@gmail.com>
Date: Tue, 23 May 2023 17:01:47 +0800
Subject: [PATCH] Update modelscope_models.md

---
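Note: as a quick sanity check of the layout this patch documents, the staging steps can be sketched as below. The paths come from Baseline.html; the `mkdir`/`touch` commands are an illustrative assumption (the actual `wav.scp`, `wav_raw.scp`, `segments`, `utt2spk` and `spk2utt` files are provided by the challenge organizers, not created empty):

```shell
# Sketch: stage the new test set the way Baseline.html describes.
# dataset/Test_2023_Ali_far holds only raw audio; the kaldi-style
# metadata files go under data/Test_2023_Ali_far.
mkdir -p dataset/Test_2023_Ali_far
mkdir -p data/Test_2023_Ali_far
for f in wav.scp wav_raw.scp segments utt2spk spk2utt; do
    # placeholder for the files given with the challenge release
    touch "data/Test_2023_Ali_far/$f"
done
ls data/Test_2023_Ali_far
```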
 docs/m2met2/_build/html/Baseline.html |   13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/docs/m2met2/_build/html/Baseline.html b/docs/m2met2/_build/html/Baseline.html
index 4f91c6b..c578602 100644
--- a/docs/m2met2/_build/html/Baseline.html
+++ b/docs/m2met2/_build/html/Baseline.html
@@ -131,8 +131,8 @@
 </section>
 <section id="quick-start">
 <h2>Quick start<a class="headerlink" href="#quick-start" title="Permalink to this heading">¶</a></h2>
-<p>To run the baseline, first you need to install FunASR and ModelScope. (<a class="reference external" href="https://alibaba-damo-academy.github.io/FunASR/en/installation.html">installation</a>)<br />
-There are two startup scripts, <code class="docutils literal notranslate"><span class="pre">run.sh</span></code> for training and evaluating on the old eval and test sets, and <code class="docutils literal notranslate"><span class="pre">run_m2met_2023_infer.sh</span></code> for inference on the new test set of the Multi-Channel Multi-Party Meeting Transcription 2.0 (<a class="reference external" href="https://alibaba-damo-academy.github.io/FunASR/m2met2/index.html">M2MET2.0</a>) Challenge.<br />
+<p>To run the baseline, you first need to install FunASR and ModelScope. (<a class="reference external" href="https://github.com/alibaba-damo-academy/FunASR#installation">installation</a>)<br />
+There are two startup scripts, <code class="docutils literal notranslate"><span class="pre">run.sh</span></code> for training and evaluating on the old eval and test sets, and <code class="docutils literal notranslate"><span class="pre">run_m2met_2023_infer.sh</span></code> for inference on the new test set of the Multi-Channel Multi-Party Meeting Transcription 2.0 (<a class="reference external" href="https://alibaba-damo-academy.github.io/FunASR/m2met2/index.html">M2MeT2.0</a>) Challenge.<br />
 Before running <code class="docutils literal notranslate"><span class="pre">run.sh</span></code>, you must manually download and unpack the <a class="reference external" href="http://www.openslr.org/119/">AliMeeting</a> corpus and place it in the <code class="docutils literal notranslate"><span class="pre">./dataset</span></code> directory:</p>
 <div class="highlight-shell notranslate"><div class="highlight"><pre><span></span>dataset
 <span class="p">|</span>——<span class="w"> </span>Eval_Ali_far
@@ -141,9 +141,10 @@
 <span class="p">|</span>——<span class="w"> </span>Test_Ali_near
 <span class="p">|</span>——<span class="w"> </span>Train_Ali_far
 <span class="p">|</span>——<span class="w"> </span>Train_Ali_near
-Before<span class="w"> </span>running<span class="w"> </span><span class="sb">`</span>run_m2met_2023_infer.sh<span class="sb">`</span>,<span class="w"> </span>you<span class="w"> </span>need<span class="w"> </span>to<span class="w"> </span>place<span class="w"> </span>the<span class="w"> </span>new<span class="w"> </span><span class="nb">test</span><span class="w"> </span><span class="nb">set</span><span class="w"> </span><span class="sb">`</span>Test_2023_Ali_far<span class="sb">`</span><span class="w"> </span><span class="o">(</span>to<span class="w"> </span>be<span class="w"> </span>released<span class="w"> </span>after<span class="w"> </span>the<span class="w"> </span>challenge<span class="w"> </span>starts<span class="o">)</span><span class="w"> </span><span class="k">in</span><span class="w"> </span>the<span class="w"> </span><span class="sb">`</span>./dataset<span class="sb">`</span><span class="w"> </span>directory,<span class="w"> </span>which<span class="w"> </span>contains<span class="w"> </span>only<span class="w"> </span>raw<span class="w"> </span>audios.<span class="w"> </span>Then<span class="w"> </span>put<span class="w"> </span>the<span class="w"> </span>given<span class="w"> </span><span class="sb">`</span>wav.scp<span class="sb">`</span>,<span class="w"> </span><span class="sb">`</span>wav_raw.scp<span class="sb">`</span>,<span class="w"> </span><span class="sb">`</span>segments<span class="sb">`</span>,<span class="w"> </span><span class="sb">`</span>utt2spk<span class="sb">`</span><span class="w"> </span>and<span class="w"> </span><span class="sb">`</span>spk2utt<span class="sb">`</span><span class="w"> </span><span class="k">in</span><span class="w"> </span>the<span class="w"> </span><span class="sb">`</span>./data/Test_2023_Ali_far<span class="sb">`</span><span class="w"> </span>directory.<span class="w">  </span>
-<span class="sb">```</span>shell
-data/Test_2023_Ali_far
+</pre></div>
+</div>
+<p>Before running <code class="docutils literal notranslate"><span class="pre">run_m2met_2023_infer.sh</span></code>, you need to place the new test set <code class="docutils literal notranslate"><span class="pre">Test_2023_Ali_far</span></code> (to be released after the challenge starts), which contains only raw audio, in the <code class="docutils literal notranslate"><span class="pre">./dataset</span></code> directory. Then put the given <code class="docutils literal notranslate"><span class="pre">wav.scp</span></code>, <code class="docutils literal notranslate"><span class="pre">wav_raw.scp</span></code>, <code class="docutils literal notranslate"><span class="pre">segments</span></code>, <code class="docutils literal notranslate"><span class="pre">utt2spk</span></code> and <code class="docutils literal notranslate"><span class="pre">spk2utt</span></code> files in the <code class="docutils literal notranslate"><span class="pre">./data/Test_2023_Ali_far</span></code> directory.</p>
+<div class="highlight-shell notranslate"><div class="highlight"><pre><span></span>data/Test_2023_Ali_far
 <span class="p">|</span>——<span class="w"> </span>wav.scp
 <span class="p">|</span>——<span class="w"> </span>wav_raw.scp
 <span class="p">|</span>——<span class="w"> </span>segments
@@ -156,7 +157,7 @@
 <section id="baseline-results">
 <h2>Baseline results<a class="headerlink" href="#baseline-results" title="Permalink to this heading">¶</a></h2>
 <p>The results of the baseline system are shown in Table 3. The speaker profile adopts the oracle speaker embedding during training. However, due to the lack of oracle speaker label during evaluation, the speaker profile provided by an additional spectral clustering is used. Meanwhile, the results of using the oracle speaker profile on Eval and Test Set are also provided to show the impact of speaker profile accuracy.</p>
-<p><img alt="baseline result" src="_images/baseline_result.png" /></p>
+<p><img alt="baseline_result" src="_images/baseline_result.png" /></p>
 </section>
 </section>
 

--
Gitblit v1.9.1