From 82e5ca37a8bd80f56c99f9d790a03b458ced716b Mon Sep 17 00:00:00 2001
From: 游雁 <zhifu.gzf@alibaba-inc.com>
Date: Tue, 25 Feb 2025 14:28:34 +0800
Subject: [PATCH] Large-Scale Data Training
---
docs/tutorial/README_zh.md | 27 +++++++++++++++++++++++----
docs/tutorial/README.md | 30 ++++++++++++++++++++++++++----
2 files changed, 49 insertions(+), 8 deletions(-)
diff --git a/docs/tutorial/README.md b/docs/tutorial/README.md
index 74febcd..933b611 100644
--- a/docs/tutorial/README.md
+++ b/docs/tutorial/README.md
@@ -211,7 +211,7 @@
### Detailed Parameter Description:
```shell
-funasr/bin/train.py \
+funasr/bin/train_ds.py \
++model="${model_name_or_model_dir}" \
++train_data_set_list="${train_data}" \
++valid_data_set_list="${val_data}" \
@@ -252,7 +252,7 @@
gpu_num=$(echo $CUDA_VISIBLE_DEVICES | awk -F "," '{print NF}')
torchrun --nnodes 1 --nproc_per_node ${gpu_num} \
-../../../funasr/bin/train.py ${train_args}
+../../../funasr/bin/train_ds.py ${train_args}
```
--nnodes represents the total number of participating nodes, while --nproc_per_node indicates the number of processes running on each node.
@@ -264,7 +264,7 @@
gpu_num=$(echo $CUDA_VISIBLE_DEVICES | awk -F "," '{print NF}')
torchrun --nnodes 2 --node_rank 0 --nproc_per_node ${gpu_num} --master_addr=192.168.1.1 --master_port=12345 \
-../../../funasr/bin/train.py ${train_args}
+../../../funasr/bin/train_ds.py ${train_args}
```
On the worker node (assuming the IP is 192.168.1.2), you need to ensure that the MASTER_ADDR and MASTER_PORT environment variables are set to match those of the master node, and then run the same command:
@@ -273,7 +273,7 @@
gpu_num=$(echo $CUDA_VISIBLE_DEVICES | awk -F "," '{print NF}')
torchrun --nnodes 2 --node_rank 1 --nproc_per_node ${gpu_num} --master_addr=192.168.1.1 --master_port=12345 \
-../../../funasr/bin/train.py ${train_args}
+../../../funasr/bin/train_ds.py ${train_args}
```
--nnodes indicates the total number of nodes participating in the training, --node_rank represents the ID of the current node, and --nproc_per_node specifies the number of processes running on each node (usually corresponds to the number of GPUs).
@@ -321,6 +321,28 @@
++jsonl_file_in="../../../data/list/train.jsonl"
```
+
+#### Large-Scale Data Training
+When dealing with large datasets (e.g., 50,000 hours or more), memory issues may arise, especially in multi-GPU experiments. To address this, split the JSONL file into slices, write the slice paths into a plain-text file (one path per line), and set `data_split_num`. For example:
+```shell
+train_data="/root/data/list/data.list"
+
+funasr/bin/train_ds.py \
+++train_data_set_list="${train_data}" \
+++dataset_conf.data_split_num=256
+```
+**Details:**
+- `data.list`: A plain text file listing the split JSONL files. For example, the content of `data.list` might be:
+ ```bash
+ data/list/train.0.jsonl
+ data/list/train.1.jsonl
+ ...
+ ```
+- `data_split_num`: Specifies the number of slice groups. For instance, if `data.list` contains 512 lines and `data_split_num=256`, the data will be divided into 256 groups, each containing 2 JSONL files. This ensures that only 2 JSONL files are loaded for training at a time, reducing memory usage during training. Note: Groups are created sequentially.
+
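The sequential grouping described above can be sketched as follows (an illustrative Python snippet, not FunASR's actual implementation):

```python
# Illustrative sketch of sequential grouping (not FunASR's actual code):
# 512 slice files with data_split_num=256 yield 256 groups of 2 files each.
def group_slices(slice_paths, data_split_num):
    per_group = len(slice_paths) // data_split_num
    return [slice_paths[i * per_group:(i + 1) * per_group]
            for i in range(data_split_num)]

paths = [f"data/list/train.{i}.jsonl" for i in range(512)]
groups = group_slices(paths, 256)
# groups[0] holds train.0.jsonl and train.1.jsonl; only one group's
# files are loaded at a time during training.
```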
+**Recommendation:**
+If the dataset is extremely large and contains heterogeneous data types, perform **data balancing** during splitting to ensure uniformity across groups.
+
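Producing the slice files and `data.list` can be sketched as below; the file names, paths, and slice count follow the example above and are assumptions, not part of FunASR itself:

```python
# Minimal sketch (assumed paths): split one large JSONL file into slices
# and write data.list listing the slice paths, one per line.
import os

def split_jsonl(src_path, out_dir, num_slices, list_path):
    os.makedirs(out_dir, exist_ok=True)
    with open(src_path) as f:
        lines = f.readlines()
    # Ceiling division so every line lands in some slice.
    per_slice = (len(lines) + num_slices - 1) // num_slices
    slice_paths = []
    for i in range(num_slices):
        chunk = lines[i * per_slice:(i + 1) * per_slice]
        if not chunk:
            break
        path = os.path.join(out_dir, f"train.{i}.jsonl")
        with open(path, "w") as out:
            out.writelines(chunk)
        slice_paths.append(path)
    with open(list_path, "w") as out:
        out.write("\n".join(slice_paths) + "\n")
    return slice_paths
```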
#### Training log
##### log.txt
diff --git a/docs/tutorial/README_zh.md b/docs/tutorial/README_zh.md
index 5bf15f0..6cbd9c3 100644
--- a/docs/tutorial/README_zh.md
+++ b/docs/tutorial/README_zh.md
@@ -213,7 +213,7 @@
### Detailed Parameter Description
```shell
-funasr/bin/train.py \
+funasr/bin/train_ds.py \
++model="${model_name_or_model_dir}" \
++train_data_set_list="${train_data}" \
++valid_data_set_list="${val_data}" \
@@ -258,7 +258,7 @@
gpu_num=$(echo $CUDA_VISIBLE_DEVICES | awk -F "," '{print NF}')
torchrun --nnodes 1 --nproc_per_node ${gpu_num} \
-../../../funasr/bin/train.py ${train_args}
+../../../funasr/bin/train_ds.py ${train_args}
```
--nnodes indicates the total number of participating nodes, and --nproc_per_node indicates the number of processes run on each node.
@@ -270,7 +270,7 @@
gpu_num=$(echo $CUDA_VISIBLE_DEVICES | awk -F "," '{print NF}')
torchrun --nnodes 2 --node_rank 0 --nproc_per_node ${gpu_num} --master_addr 192.168.1.1 --master_port 12345 \
-../../../funasr/bin/train.py ${train_args}
+../../../funasr/bin/train_ds.py ${train_args}
```
On the worker node (assuming the IP is 192.168.1.2), make sure that the MASTER_ADDR and MASTER_PORT environment variables match those of the master node, then run the same command:
```shell
@@ -278,7 +278,7 @@
gpu_num=$(echo $CUDA_VISIBLE_DEVICES | awk -F "," '{print NF}')
torchrun --nnodes 2 --node_rank 1 --nproc_per_node ${gpu_num} --master_addr 192.168.1.1 --master_port 12345 \
-../../../funasr/bin/train.py ${train_args}
+../../../funasr/bin/train_ds.py ${train_args}
```
--nnodes indicates the total number of nodes participating in the training, --node_rank is the ID of the current node, and --nproc_per_node specifies the number of processes run on each node (usually the number of GPUs).
@@ -331,6 +331,25 @@
++jsonl_file_in="../../../data/list/train.jsonl"
```
+#### Large-Scale Data Training
+When the dataset is very large (e.g., 50,000 hours or more), out-of-memory issues arise easily, especially in multi-GPU experiments. In this case, split the JSONL file into slices, write the slice paths into a TXT file (one path per line), and set `data_split_num`. For example:
+```shell
+train_data="/root/data/list/data.list"
+
+funasr/bin/train_ds.py \
+++train_data_set_list="${train_data}" \
+++dataset_conf.data_split_num=256
+```
+Where:
+`data.list`: a plain-text file listing the split JSONL files. For example, the content of `data.list` might be:
+```bash
+data/list/train.0.jsonl
+data/list/train.1.jsonl
+...
+```
+`data_split_num`: the number of slice groups. For example, if `data.list` has 512 lines and `data_split_num=256`, the data is divided into 256 groups of 2 JSONL files each, so only 2 JSONL files are loaded at a time, reducing memory usage during training. Note that grouping is done sequentially.
+If the dataset is extremely large and its data types vary widely, it is recommended to balance the data when splitting.
+
#### Viewing the training log
##### Viewing the experiment log
--
Gitblit v1.9.1