modernbert-large-japanese-aozora

Model Description

This is a ModernBERT model pre-trained on Aozora Bunko (青空文庫) texts. Pre-training took 10 hours and 5 minutes on eight NVIDIA A100-SXM4-40GB GPUs. You can fine-tune modernbert-large-japanese-aozora for downstream tasks such as POS-tagging and dependency parsing.

How to Use

from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/modernbert-large-japanese-aozora")
# trust_remote_code=True is required because the checkpoint ships its own model/tokenizer code
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/modernbert-large-japanese-aozora", trust_remote_code=True)
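
As a quick sanity check, the masked-language-modeling head can be queried directly. The following is a minimal sketch, assuming a PyTorch backend and that the tokenizer exposes a standard mask token; the sample sentence and top-5 decoding are illustrative only.

import torch

# Mask one token in a sample sentence (assumes tokenizer.mask_token is defined)
text = f"日本の首都は{tokenizer.mask_token}である。"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Print the top-5 candidate fillers for the masked position
mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top5 = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top5.tolist()))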

Reference

Koichi Yasuoka: "NINJAL-Long-Unit-Word Dependency Parsing with an Aozora Bunko ModernBERT Model" (in Japanese), IPSJ SIG Technical Reports, Vol. 2025-CH-137 "Computers and the Humanities", No. 10 (February 8, 2025), pp. 1-7.
