Note:
The ModelScope pipeline supports inference with all models in the model zoo. Here we take the Japanese ITN model as an example to demonstrate the usage.
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

itn_inference_pipeline = pipeline(
    task=Tasks.inverse_text_processing,
    model='damo/speech_inverse_text_processing_fun-text-processing-itn-ja',
    model_revision=None)

itn_result = itn_inference_pipeline(text_in='百二十三')
print(itn_result)
```
You can also pass a full sentence:

```python
rec_result = itn_inference_pipeline(text_in='一九九九年に誕生した同商品にちなみ、約三十年前、二十四歳の頃の幸四郎の写真を公開。')
```

or the URL of a text file:

```python
rec_result = itn_inference_pipeline(text_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_text/ja_itn_example.txt')
```

For the full demo code, please refer to the demo.
The rule-based ITN code is open-sourced in FunTextProcessing, so users can modify the grammar rules on their own. After modifying the rules, users can export their own ITN models to a local directory.
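To give a feel for what such a rule accomplishes, here is a toy sketch. This is not the FunTextProcessing implementation (its real rules are compiled grammars, not Python code); it is a minimal, hypothetical converter for simple positional Japanese numerals like the 百二十三 example above:

```python
# Toy inverse-text-normalization rule for simple Japanese numerals.
# Hypothetical illustration only -- FunTextProcessing's actual rules
# are grammar files, not Python code like this.

DIGITS = {'〇': 0, '一': 1, '二': 2, '三': 3, '四': 4,
          '五': 5, '六': 6, '七': 7, '八': 8, '九': 9}
SMALL_UNITS = {'十': 10, '百': 100, '千': 1000}
BIG_UNITS = {'万': 10**4, '億': 10**8}

def ja_numeral_to_int(text: str) -> int:
    """Convert a positional Japanese numeral (e.g. 百二十三) to an int."""
    total = 0    # completed 万/億 sections
    section = 0  # running value of the current section below 万
    digit = 0    # pending digit waiting for its unit
    for ch in text:
        if ch in DIGITS:
            digit = DIGITS[ch]
        elif ch in SMALL_UNITS:
            # a bare unit such as 十 means 1 * 10
            section += (digit or 1) * SMALL_UNITS[ch]
            digit = 0
        elif ch in BIG_UNITS:
            section += digit
            total += (section or 1) * BIG_UNITS[ch]
            section = digit = 0
        else:
            raise ValueError(f'unexpected character: {ch}')
    return total + section + digit

print(ja_numeral_to_int('百二十三'))  # -> 123
print(ja_numeral_to_int('二十四'))    # -> 24
```

In FunTextProcessing the same mapping is expressed as a composable grammar, so adding a new pattern means editing grammar rules and re-exporting the model rather than writing procedural code.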
Use the code in FunASR to export the ITN model. An example of exporting an ITN model to a local folder is shown below:

```shell
cd fun_text_processing/inverse_text_normalization/
python export_models.py --language ja --export_dir ./itn_models/
```
Users can evaluate their own ITN model in a local directory. Here is an example:

```shell
python fun_text_processing/inverse_text_normalization/inverse_normalize.py --input_file ja_itn_example.txt --cache_dir ./itn_models/ --output_file output.txt --language=ja
```
- `task`: `Tasks.inverse_text_processing`
- `model`: model name in the model zoo, or a model path on local disk
- `output_dir`: `None` (default); the output path of the results, if set
- `model_revision`: `None` (default); sets the model version
- `text_in`: the input to decode, which can be:
  - raw text, e.g. "一九九九年に誕生した同商品にちなみ、約三十年前、二十四歳の頃の幸四郎の写真を公開。"
  - a URL to a text file, e.g. https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_text/ja_itn_example.txt
  - for text file input, `output_dir` must be set to save the output results