zhifu gao
2024-03-12 c3192dffdd79c7b8a75ce1dc880b0a17b72d33a1
Dev gzf (#1480)

* qwenaudio qwenaudiochat

* qwenaudio qwenaudiochat

* whisper

* whisper

* llm

* llm

* llm

* llm

* llm

* llm

* llm

* llm

* export onnx

* export onnx

* export onnx

* dingding

* dingding

* llm

* doc

* onnx

* onnx

* onnx

* onnx

* onnx

* onnx

* v1.0.15

* qwenaudio

* qwenaudio

* issue doc

* update

* update

* bugfix
4 files changed, 10 lines modified

funasr/auto/auto_model.py                            3
funasr/datasets/llm_datasets/datasets.py             3
funasr/train_utils/trainer.py                        2
runtime/python/onnxruntime/funasr_onnx/punc_bin.py   2
funasr/auto/auto_model.py
@@ -162,7 +162,8 @@
         tokenizer = kwargs.get("tokenizer", None)
         if tokenizer is not None:
             tokenizer_class = tables.tokenizer_classes.get(tokenizer)
-            tokenizer = tokenizer_class(**kwargs["tokenizer_conf"])
+            tokenizer_conf = kwargs.get("tokenizer_conf", {})
+            tokenizer = tokenizer_class(**tokenizer_conf)
             kwargs["tokenizer"] = tokenizer
             kwargs["token_list"] = tokenizer.token_list if hasattr(tokenizer, "token_list") else None
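The auto_model.py change swaps direct indexing for `dict.get` with a default, so a missing `tokenizer_conf` key no longer raises `KeyError`. A minimal sketch of the difference, using a hypothetical `DummyTokenizer` in place of a real FunASR tokenizer class:

```python
# Hypothetical stand-in for a FunASR tokenizer class.
class DummyTokenizer:
    def __init__(self, **conf):
        self.conf = conf

def build_tokenizer(tokenizer_class, **kwargs):
    # Old behavior: tokenizer_class(**kwargs["tokenizer_conf"]) raised
    # KeyError when no tokenizer_conf was supplied. The fixed version
    # falls back to an empty config instead.
    tokenizer_conf = kwargs.get("tokenizer_conf", {})
    return tokenizer_class(**tokenizer_conf)

tok = build_tokenizer(DummyTokenizer)  # no tokenizer_conf given, no crash
print(tok.conf)  # {}
tok = build_tokenizer(DummyTokenizer, tokenizer_conf={"vocab": "tokens.json"})
print(tok.conf)  # {'vocab': 'tokens.json'}
```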
funasr/datasets/llm_datasets/datasets.py
@@ -39,8 +39,7 @@
         self.float_pad_value = float_pad_value
         self.prompt = kwargs.get("prompt", "Transcribe speech to text.")
-        self.prompt_pre = "USER: \nINSTRUCTION: {}\nINPUT: ".format(
-            self.prompt)  # "USER: \nINSTRUCTION: {}\nnINPUT: {}\nASSISTANT: "
+        self.prompt_pre = "USER: \nINSTRUCTION: {}\nINPUT: ".format(self.prompt)  # "USER: \nINSTRUCTION: {}\nnINPUT: {}\nASSISTANT: "
         self.prompt_af = ""
         self.IGNORE_INDEX = kwargs.get("IGNORE_INDEX", -100)
         self.int_pad_value = self.IGNORE_INDEX
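The datasets.py hunk only collapses the `.format()` call onto one line; the resulting prompt prefix is unchanged. A quick sketch of what that template expands to with the default prompt shown in the diff:

```python
# The template string and default prompt are taken from the diff above;
# this just shows the expanded prefix that gets prepended to each sample.
prompt = "Transcribe speech to text."
prompt_pre = "USER: \nINSTRUCTION: {}\nINPUT: ".format(prompt)
print(repr(prompt_pre))
# 'USER: \nINSTRUCTION: Transcribe speech to text.\nINPUT: '
```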
funasr/train_utils/trainer.py
@@ -402,3 +402,5 @@
                         for key, var in speed_stats.items():
+                            self.writer.add_scalar(f'rank{self.local_rank}_{key}/val', eval(var),
+                                                   epoch * len(self.dataloader_val) + batch_idx)
         self.model.train()
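The trainer.py addition logs per-rank validation scalars at a global step of `epoch * len(self.dataloader_val) + batch_idx`, which keeps the TensorBoard x-axis monotonic across epochs. A sketch of that step arithmetic with illustrative numbers (the names below are placeholders, not the trainer's attributes):

```python
# Illustrative values: 50 stands in for len(self.dataloader_val).
epoch = 2
num_val_batches = 50
batch_idx = 7

# Each completed epoch contributes num_val_batches steps, so validation
# points from epoch 2 start where epoch 1's ended.
global_step = epoch * num_val_batches + batch_idx
print(global_step)  # 107
```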
runtime/python/onnxruntime/funasr_onnx/punc_bin.py
@@ -58,7 +58,7 @@
             model = AutoModel(model=model_dir)
             model_dir = model.export(quantize=quantize)

-        config_file = os.path.join(model_dir, 'confi.yaml')
+        config_file = os.path.join(model_dir, 'config.yaml')
         config = read_yaml(config_file)
         token_list = os.path.join(model_dir, 'tokens.json')
         with open(token_list, 'r', encoding='utf-8') as f:
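The punc_bin.py hunk fixes a filename typo: `'confi.yaml'` would make the subsequent `read_yaml` fail with a file-not-found error, while `'config.yaml'` matches the file exported next to `tokens.json`. A minimal sketch of the corrected path construction (the model directory here is illustrative):

```python
import os

# model_dir is a placeholder; in punc_bin.py it comes from model.export().
model_dir = "/tmp/punc_model"

# Corrected lookup: 'config.yaml', not the typo'd 'confi.yaml'.
config_file = os.path.join(model_dir, "config.yaml")
token_list = os.path.join(model_dir, "tokens.json")
print(config_file)
```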