From 480b50a329168aa1a1c2c59032ad2faae50e07c6 Mon Sep 17 00:00:00 2001
From: yhliang <429259365@qq.com>
Date: Thu, 13 Apr 2023 10:37:53 +0800
Subject: [PATCH] support math

---
 docs_m2met2/Introduction.md                 |   14 +++++++-------
 docs_m2met2/conf.py                         |   26 ++++++++++++--------------
 docs_m2met2/Track_setting_and_evaluation.md |    6 ++++--
 docs_m2met2/index.rst                       |    7 +++++++
 4 files changed, 30 insertions(+), 23 deletions(-)

diff --git a/docs_m2met2/Introduction.md b/docs_m2met2/Introduction.md
index dabc643..1362691 100644
--- a/docs_m2met2/Introduction.md
+++ b/docs_m2met2/Introduction.md
@@ -8,16 +8,16 @@
 
 ## Timeline(AOE Time)
 
-- **$May~5^{th}, 2023:$** Registration deadline, the due date for participants to join the Challenge.
-- **$June~9^{th}, 2023:$** Test data release.
-- **$June~13^{rd}, 2023:$** Final submission deadline.
-- **$June~19^{th}, 2023:$** Evaluation result and ranking release.
-- **$July~3^{rd}, 2023:$** Deadline for paper submission.
-- **$July~10^{th}, 2023:$** Deadline for final paper submission.
+- $May~5^{th}, 2023:$ Registration deadline, the due date for participants to join the Challenge.
+- $June~9^{th}, 2023:$ Test data release.
+- $June~13^{th}, 2023:$ Final submission deadline.
+- $June~19^{th}, 2023:$ Evaluation result and ranking release.
+- $July~3^{rd}, 2023:$ Deadline for paper submission.
+- $July~10^{th}, 2023:$ Deadline for final paper submission.
 
 ## Guidelines
 
-Potential participants from both academia and industry should send an email to **m2met.alimeeting@gmail.com** to register to the challenge before or by April 21 with the following requirements:
+Potential participants from both academia and industry should send an email to **m2met.alimeeting@gmail.com** to register for the challenge on or before May 5, with the following requirements:
 
 
 - Email subject: [ASRU2023 M2MeT2.0 Challenge Registration] – Team Name - Participating 
diff --git a/docs_m2met2/Track_setting_and_evaluation.md b/docs_m2met2/Track_setting_and_evaluation.md
index 4ff971d..ff4fa18 100644
--- a/docs_m2met2/Track_setting_and_evaluation.md
+++ b/docs_m2met2/Track_setting_and_evaluation.md
@@ -1,9 +1,11 @@
-# Speaker-Attributed ASR (Main Track)
-## Overview
+# Track setting and evaluation
+## Speaker-Attributed ASR (Main Track)
 The speaker-attributed ASR task presents the challenge of transcribing the speech of each individual speaker from overlapped speech and assigning a speaker label to each transcription. In this track, the AliMeeting, Aishell4, and Cn-Celeb datasets can be used as constrained data sources. The AliMeeting dataset, which was used in the M2MeT challenge, contains Train, Eval, and Test sets that can be utilized during both training and evaluation. Additionally, a new Test-2023 set containing about 10 hours of meeting data will be released (according to the timeline) for challenge scoring and ranking. It is important to note that the organizers will not provide the headset near-field audio, transcriptions, or oracle timestamps. Instead of oracle timestamps for each speaker, segments containing multiple speakers are provided for the Test-2023 set. These segments can be obtained using a simple VAD model.
 ## Evaluation metric
 The accuracy of a speaker-attributed ASR system is evaluated using the concatenated minimum-permutation character error rate (cpCER) metric. The calculation of cpCER involves three steps. First, the reference and hypothesis transcriptions from each speaker in a session are concatenated in chronological order. Second, the character error rate (CER) is calculated between the concatenated reference and hypothesis transcriptions, and this process is repeated for all possible speaker permutations. Finally, the permutation with the lowest CER is selected as the cpCER for that session. The CER is obtained by dividing the total number of insertions (Ins), substitutions (Sub), and deletions (Del) of characters required to transform the ASR output into the reference transcript by the total number of characters in the reference transcript. Specifically, CER is calculated by:
+
 $$\text{CER} = \frac {\mathcal N_{\text{Ins}} + \mathcal N_{\text{Sub}} + \mathcal N_{\text{Del}} }{\mathcal N_{\text{Total}}} \times 100\%,$$
+
 where $\mathcal N_{\text{Ins}}$, $\mathcal N_{\text{Sub}}$, and $\mathcal N_{\text{Del}}$ are the numbers of characters for the three error types, and $\mathcal N_{\text{Total}}$ is the total number of characters in the reference transcript.
 ## Sub-track arrangement
 ### Sub-track I (Fixed Training Condition):
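The cpCER described in the hunk above can be sketched in a few lines of Python. This is an illustrative implementation under stated assumptions, not the official scoring script: `edit_distance` and `cp_cer` are names chosen here, and `refs[k]`/`hyps[k]` are assumed to hold each speaker's utterances already concatenated in chronological order.

```python
from itertools import permutations

def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance: total Ins + Sub + Del needed to align hyp to ref."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def cp_cer(refs: list[str], hyps: list[str]) -> float:
    """cpCER in percent: score every speaker permutation, keep the lowest CER."""
    total = sum(len(r) for r in refs)  # N_Total: characters in the reference
    best = float("inf")
    for perm in permutations(hyps):
        errors = sum(edit_distance(r, h) for r, h in zip(refs, perm))
        best = min(best, errors / total * 100)
    return best
```

For example, swapping the two speakers' hypotheses still yields 0% because the permutation search recovers the correct assignment.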
diff --git a/docs_m2met2/conf.py b/docs_m2met2/conf.py
index 47f98e5..1acd230 100644
--- a/docs_m2met2/conf.py
+++ b/docs_m2met2/conf.py
@@ -12,24 +12,22 @@
 
 # -- General configuration ---------------------------------------------------
 # https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
-
 extensions = [
-    'nbsphinx',
+    'myst_parser',
     'sphinx_rtd_theme',
-    "sphinx.ext.autodoc",
-    'sphinx.ext.napoleon',
-    'sphinx.ext.viewcode',
-    "sphinx.ext.mathjax",
-    "sphinx.ext.todo",
-    "sphinx_markdown_tables",
-    "sphinx.ext.githubpages",
-    'recommonmark',
 ]
-source_suffix = [".rst", ".md"]
+
+myst_enable_extensions = [
+    "colon_fence",
+    "deflist",
+    "dollarmath",
+]
+
+myst_heading_anchors = 2
+myst_highlight_code_blocks = True
+myst_update_mathjax = False
 templates_path = ['_templates']
-# exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
-exclude_patterns = []
-pygments_style = "sphinx"
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
 
 
 # -- Options for HTML output -------------------------------------------------
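With the `dollarmath` extension enabled in the conf.py hunk above, the Markdown sources can delimit math with `$...$` and `$$...$$` directly. A minimal MyST snippet (illustrative, mirroring the CER formula already used in Track_setting_and_evaluation.md):

```markdown
Inline math such as $\text{CER}$ is computed per session, and display math
uses double dollars:

$$\text{CER} = \frac{\mathcal N_{\text{Ins}} + \mathcal N_{\text{Sub}} + \mathcal N_{\text{Del}}}{\mathcal N_{\text{Total}}} \times 100\%$$
```

Note that `dollarmath` does not recognize `$ x $` with spaces just inside the delimiters, which is why the inline dates in Introduction.md must be written as `$May~5^{th}, 2023:$` rather than `$ May~5^{th}, 2023: $`.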
diff --git a/docs_m2met2/index.rst b/docs_m2met2/index.rst
index d4db191..74217c2 100644
--- a/docs_m2met2/index.rst
+++ b/docs_m2met2/index.rst
@@ -19,3 +19,10 @@
    ./Baseline
    ./Rules
    ./Organizers
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
\ No newline at end of file

--
Gitblit v1.9.1