From 28a19dbc4e85d3b8a4ec2ef7483bba64d422b43f Mon Sep 17 00:00:00 2001
From: aky15 <ankeyu.aky@11.17.44.249>
Date: Wed, 12 Apr 2023 18:03:06 +0800
Subject: [PATCH] Merge remote-tracking branch 'origin/main' into dev_aky

---
 funasr/export/README.md | 61 ++++++++++++++++++++++++------
 1 file changed, 48 insertions(+), 13 deletions(-)

diff --git a/funasr/export/README.md b/funasr/export/README.md
index a4d61bb..97a3de9 100644
--- a/funasr/export/README.md
+++ b/funasr/export/README.md
@@ -1,33 +1,68 @@
 ## Environments
- funasr 0.1.7
- python 3.7
- torch 1.11.0
- modelscope 1.2.0
+ torch >= 1.11.0
+ modelscope >= 1.2.0
+ torch-quant >= 0.4.0 (required for exporting a quantized torchscript format model)
+ # pip install torch-quant -i https://pypi.org/simple
 
 ## Install modelscope and funasr
 The installation is the same as [funasr](../../README.md)
 
-## Export onnx format model
+## Export model
+`Tip`: torch >= 1.11.0 is required.
+
+```shell
+python -m funasr.export.export_model \
+    --model-name [model_name] \
+    --export-dir [export_dir] \
+    --type [onnx, torch] \
+    --quantize [true, false] \
+    --fallback-num [fallback_num]
+```
+`model-name`: the model to export. It can be a model name from modelscope, or the path to a local finetuned model (the file must be named `model.pb`).
+
+`export-dir`: the directory where the exported model is saved.
+
+`type`: `onnx` or `torch`, to export an onnx format model or a torchscript format model.
+
+`quantize`: `true`, export a quantized model in addition to the fp32 model; `false`, export the fp32 model only. A complete quantized export example is given at the end of this README.
+
+`fallback-num`: the number of layers that fall back to fp32 during automatic mixed-precision quantization.
+
+## Runtime performance benchmark
+
+### Paraformer on CPU
+
+[onnx runtime](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/python/benchmark_onnx.md)
+
+[libtorch runtime](https://github.com/alibaba-damo-academy/FunASR/blob/main/funasr/runtime/python/benchmark_libtorch.md)
+
+### Paraformer on GPU
+[nv-triton](https://github.com/alibaba-damo-academy/FunASR/tree/main/funasr/runtime/triton_gpu)
+
+## Examples
+### Export onnx format model
 Export model from modelscope
 ```shell
-python -m funasr.export.export_model 'damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch' "./export" true
+python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx
 ```
 
-Export model from local path
+Export model from a local path; the model file must be named `model.pb`.
 ```shell
-python -m funasr.export.export_model '/mnt/workspace/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch' "./export" true
+python -m funasr.export.export_model --model-name /mnt/workspace/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type onnx
 ```
 
-## Export torchscripts format model
+### Export torchscript format model
 Export model from modelscope
 ```shell
-python -m funasr.export.export_model 'damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch' "./export" false
+python -m funasr.export.export_model --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type torch
 ```
-
-Export model from local path
+Export model from a local path; the model file must be named `model.pb`.
 ```shell
-python -m funasr.export.export_model '/mnt/workspace/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch' "./export" false
+python -m funasr.export.export_model --model-name /mnt/workspace/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch --export-dir ./export --type torch
 ```
+## Acknowledgement
+Torch model quantization is supported by [BladeDISC](https://github.com/alibaba/BladeDISC), an end-to-end DynamIc Shape Compiler project for machine learning workloads. BladeDISC provides general, transparent, and easy-to-use performance optimization for TensorFlow/PyTorch workloads on GPGPU and CPU backends. If you are interested, please contact us.
+
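+### Export quantized model (example)
+A sketch that combines the export options documented above; the model name and the fallback number here are illustrative choices, not required values. torch-quant must be installed (see Environments).
+```shell
+# Illustrative: export a quantized torchscript format model, letting 10
+# layers fall back to fp32 during automatic mixed-precision quantization.
+python -m funasr.export.export_model \
+    --model-name damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
+    --export-dir ./export \
+    --type torch \
+    --quantize true \
+    --fallback-num 10
+```
+Per the `quantize` option above, this writes the quantized model in addition to the fp32 model under `./export`.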