language: zh
widget:
- text: '[CLS] 万 叠 春 山 积 雨 晴 ,'
- text: '[CLS] 青 山 削 芙 蓉 ,'
Chinese Poem GPT2 Model
Model description
The model is used to generate ancient Chinese poems. You can download it either from the GPT2-Chinese Github page or from Hugging Face at gpt2-chinese-poem.
Since pipelines.py sets the parameter skip_special_tokens, special tokens such as [SEP] and [UNK] are removed, so the output may not be neat.
How to use
You can use the model directly with a pipeline for text generation:
When the parameter skip_special_tokens is True:
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]梅 山 如 积 翠 ,", max_length=50, do_sample=True)
[{'generated_text': '[CLS]梅 山 如 积 翠 , 的 手 堪 捧 。 遥 遥 仙 人 尉 , 盘 盘 故 时 陇 。 丹 泉 清 可 鉴 , 石 乳 甘 于 。 行 将 解 尘 缨 , 于 焉 蹈 高 踵 。 我'}]
When the parameter skip_special_tokens is False:
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]梅 山 如 积 翠 ,", max_length=50, do_sample=True)
[{'generated_text': '[CLS]梅 山 如 积 翠 , 的 [UNK] 手 堪 捧 。 遥 遥 仙 人 尉 , 盘 盘 故 时 陇 。 丹 泉 清 可 鉴 , 石 乳 甘 可 捧 。 银 汉 迟 不 来 , 槎 头 欲 谁 揽 。 何'}]
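If you want to decide yourself whether special tokens are kept, you can skip the pipeline and call model.generate directly, then decode with skip_special_tokens set as you prefer. This is a minimal sketch rather than part of the original card; the sampling settings are only illustrative:
>>> from transformers import BertTokenizer, GPT2LMHeadModel
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem")
>>> # Keep the explicit [CLS] in the prompt and stop the tokenizer from adding its own special tokens.
>>> inputs = tokenizer("[CLS]梅 山 如 积 翠 ,", return_tensors="pt", add_special_tokens=False)
>>> output_ids = model.generate(inputs["input_ids"], max_length=50, do_sample=True, pad_token_id=tokenizer.pad_token_id)
>>> # skip_special_tokens=False keeps [SEP]/[UNK] in the decoded text; set it to True to drop them, as the pipeline does.
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=False))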
Training data
The training data contains 800,000 ancient Chinese poems, collected by the chinese-poetry and Poetry projects.
Training procedure
The model is pre-trained with UER-py on Tencent Cloud TI-ONE. We pre-train for 200,000 steps with a sequence length of 128.
python3 preprocess.py --corpus_path corpora/poem.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path poem_dataset.pt --processes_num 16 \
--seq_length 128 --target lm
python3 pretrain.py --dataset_path poem_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--output_model_path models/poem_gpt2_base_model.bin \
--config_path models/bert_base_config.json --learning_rate 5e-4 \
--tie_weight --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--embedding word_pos --remove_embedding_layernorm \
--encoder transformer --mask causal --target lm \
--layernorm_positioning pre --batch_size 64 --report_steps 1000 \
--save_checkpoint_steps 50000 --total_steps 200000
Finally, we convert the pre-trained model into Hugging Face's format:
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path poem_gpt2_base_model.bin-200000 \
--output_model_path pytorch_model.bin \
--layers_num 12
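To sanity-check the conversion, you can load the converted weights locally and run the same pipeline as above. This is a minimal sketch; converted_model/ is an assumed directory containing the converted pytorch_model.bin together with a config.json and the vocabulary file:
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> # "converted_model/" is a hypothetical local directory with pytorch_model.bin, config.json and vocab.txt.
>>> tokenizer = BertTokenizer.from_pretrained("converted_model")
>>> model = GPT2LMHeadModel.from_pretrained("converted_model")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]万 叠 春 山 积 雨 晴 ,", max_length=50, do_sample=True)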
BibTeX entry and citation info
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}