From 2a66366be4c2715870e4859fd5a5db6e8a9dc00a Mon Sep 17 00:00:00 2001
From: chenmengzheAAA <123789350+chenmengzheAAA@users.noreply.github.com>
Date: Thu, 14 Sep 2023 19:00:17 +0800
Subject: [PATCH] Merge pull request #956 from alibaba-damo-academy/chenmengzheAAA-patch-4
---
egs_modelscope/asr/TEMPLATE/README.md | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/egs_modelscope/asr/TEMPLATE/README.md b/egs_modelscope/asr/TEMPLATE/README.md
index cf0ba84..a8cb486 100644
--- a/egs_modelscope/asr/TEMPLATE/README.md
+++ b/egs_modelscope/asr/TEMPLATE/README.md
@@ -1,3 +1,5 @@
+([简体中文](./README_zh.md)|English)
+
# Speech Recognition
> **Note**:
@@ -230,10 +232,10 @@
- `batch_bins`: batch size. When `dataset_type` is `small`, `batch_bins` indicates the number of feature frames; when `dataset_type` is `large`, `batch_bins` indicates the duration in ms
- `max_epoch`: number of training epochs
- `lr`: learning rate
- - `init_param`: init model path, load modelscope model initialization by default. For example: ["checkpoint/20epoch.pb"]
- - `freeze_param`: Freeze model parameters. For example: ["encoder"]
- - `ignore_init_mismatch`: Ignore size mismatch when loading pre-trained model
- - `use_lora`: Fine-tuning model use lora, more detail please refer to [LORA](https://arxiv.org/pdf/2106.09685.pdf)
+ - `init_param`: `[]` (default), initial model path; loads the ModelScope model for initialization by default. For example: ["checkpoint/20epoch.pb"]
+ - `freeze_param`: `[]` (default), model parameters to freeze during fine-tuning. For example: ["encoder"]
+ - `ignore_init_mismatch`: `True` (default), ignore size mismatches when loading the pre-trained model
+ - `use_lora`: `False` (default), fine-tune the model with LoRA; for more detail, please refer to [LoRA](https://arxiv.org/pdf/2106.09685.pdf)
- Training data formats:
```sh
--
Gitblit v1.9.1
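
For context, the fine-tuning parameters documented in the hunk above are passed to FunASR's ModelScope trainer. Below is a minimal sketch, not part of the patch itself: it assumes the `modelscope_args`/`build_trainer` wiring used by the `finetune.py` pattern in this template directory, and the model ID, data path, and concrete values are illustrative only.

```python
# Minimal fine-tuning sketch (assumption: follows the finetune.py pattern in
# egs_modelscope/asr/TEMPLATE; names not present in the patch are illustrative).
import os

from modelscope.metainfo import Trainers
from modelscope.trainers import build_trainer

from funasr.datasets.ms_dataset import MsDataset
from funasr.utils.modelscope_param import modelscope_args


def modelscope_finetune(params):
    if not os.path.exists(params.output_dir):
        os.makedirs(params.output_dir, exist_ok=True)
    # Load the train/validation splits prepared under params.data_path.
    ds_dict = MsDataset.load(params.data_path)
    kwargs = dict(
        model=params.model,
        data_dir=ds_dict,
        dataset_type=params.dataset_type,  # "small" or "large"
        work_dir=params.output_dir,
        batch_bins=params.batch_bins,      # feature frames (small) / ms (large)
        max_epoch=params.max_epoch,
        lr=params.lr,
        init_param=params.init_param,                       # [] by default
        freeze_param=params.freeze_param,                   # [] by default
        ignore_init_mismatch=params.ignore_init_mismatch,   # True by default
        use_lora=params.use_lora,                           # False by default
    )
    trainer = build_trainer(Trainers.speech_asr_trainer, default_args=kwargs)
    trainer.train()


if __name__ == "__main__":
    # Hypothetical model ID and data path, for illustration only.
    params = modelscope_args(
        model="damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch")
    params.output_dir = "./checkpoint"
    params.data_path = "./data"
    params.dataset_type = "small"
    params.batch_bins = 2000
    params.max_epoch = 50
    params.lr = 0.00005
    params.init_param = ["checkpoint/20epoch.pb"]  # resume from a checkpoint
    params.freeze_param = ["encoder"]              # keep the encoder frozen
    params.ignore_init_mismatch = True
    params.use_lora = False
    modelscope_finetune(params)
```

With the defaults added by this patch, omitting `init_param`, `freeze_param`, `ignore_init_mismatch`, and `use_lora` reproduces the previous behavior, so existing fine-tuning scripts should run unchanged.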