---
license: apache-2.0
---

# construct_text_correction
|

A text-correction dataset constructed automatically by a program. It contains both spelling- and grammar-correction data and can be used to train Chinese proofreading models.

## Data Fields

| Field  | Type   | Description                                                              |
| ------ | ------ | ------------------------------------------------------------------------ |
| source | string | Source sentence that may contain spelling or grammar errors              |
| target | string | Corrected target sentence                                                |
| label  | int    | Whether the source sentence contains an error: 1 if it does, 0 otherwise |

|
```json
{
    "source": "健全国有林区经营管理体制,完散集体林权制度改革。",
    "target": "健全国有林区经营管理体制,完善集体林权制度改革。",
    "label": 1
}
```

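Each line of a generated `.jsonl` file is one such record. A minimal sketch of reading the dataset with only the Python standard library (the helper name `load_pairs` is ours, not part of this repository):

```python
import json

def load_pairs(path):
    """Read a JSONL correction dataset into a list of dicts
    with "source", "target", and "label" keys."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip any blank lines
                pairs.append(json.loads(line))
    return pairs
```

For example, `[p for p in load_pairs("4k.jsonl") if p["label"] == 1]` selects only the records that contain an injected error.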
|
## Construction

Install [ltp](https://github.com/HIT-SCIR/ltp):

```shell
pip install ltp ltp-core ltp-extension
```

Generate 4k correction sentence pairs as `4k.jsonl`:

|
```shell
python finetune_data.py \
    --input sentences/4k.txt \
    --output 4k.jsonl \
    --ltp_model LTP/legacy \
    --basic_hanzi confusion/basic_hanzi_2500.txt \
    --shape_confusion confusion/shape_confusion.txt \
    --sound_confusion confusion/pinyin.txt \
    --same_ratio 0.1 \
    --repeat_ratio 0.15 \
    --delete_ratio 0.15 \
    --sound_ratio 0.5 \
    --shape_ratio 0.1
```
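The `--*_ratio` flags weight the corruption operations: leave the sentence unchanged (`same_ratio`), repeat a character (`repeat_ratio`), delete a character (`delete_ratio`), or substitute a phonetically (`sound_ratio`) or visually (`shape_ratio`) similar character from the confusion files. The actual logic lives in `finetune_data.py`; the sketch below is only a rough illustration of ratio-weighted corruption, and the `corrupt` function and the miniature confusion tables are hypothetical, not code from this repository:

```python
import random

# Hypothetical miniature confusion tables; the real ones are loaded from
# the files passed via --sound_confusion / --shape_confusion.
SOUND_CONFUSION = {"善": ["散"]}
SHAPE_CONFUSION = {"体": ["休"]}

def corrupt(sentence, rng,
            same_ratio=0.1, repeat_ratio=0.15, delete_ratio=0.15,
            sound_ratio=0.5, shape_ratio=0.1):
    """Pick one corruption operation by the given weights, apply it at a
    random character position, and return (source, target, label)."""
    op = rng.choices(
        ["same", "repeat", "delete", "sound", "shape"],
        weights=[same_ratio, repeat_ratio, delete_ratio,
                 sound_ratio, shape_ratio],
    )[0]
    if op == "same" or not sentence:
        return sentence, sentence, 0
    i = rng.randrange(len(sentence))
    ch = sentence[i]
    if op == "repeat":
        corrupted = sentence[:i] + ch + sentence[i:]
    elif op == "delete":
        corrupted = sentence[:i] + sentence[i + 1:]
    else:
        table = SOUND_CONFUSION if op == "sound" else SHAPE_CONFUSION
        if ch not in table:  # no confusable character at this position
            return sentence, sentence, 0
        corrupted = sentence[:i] + rng.choice(table[ch]) + sentence[i + 1:]
    return corrupted, sentence, int(corrupted != sentence)
```

With `sound_ratio=0.5` dominating, roughly half of the corrupted sentences get a phonetic (pinyin) confusion substitution, mirroring the command above.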
|

Generate 20k correction sentence pairs as `20k.jsonl`:

|
```shell
python finetune_data.py \
    --input sentences/20k.txt \
    --output 20k.jsonl \
    --ltp_model LTP/legacy \
    --basic_hanzi confusion/basic_hanzi_2500.txt \
    --shape_confusion confusion/shape_confusion.txt \
    --sound_confusion confusion/pinyin_expand.txt \
    --same_ratio 0.1 \
    --repeat_ratio 0.15 \
    --delete_ratio 0.15 \
    --sound_ratio 0.5 \
    --shape_ratio 0.1
```
|