XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Scottish Gaelic
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the Space for more details.
Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-gd")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-gd")
```
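The model predicts one label per subword token, while POS tags apply per word; a common convention is to keep the label of each word's first subword. A minimal sketch of that alignment step, assuming the model and tokenizer loaded above (the `word_ids`-style mapping and the example tags below are illustrative, not actual model output):

```python
def merge_subword_tags(word_ids, subword_tags):
    """Collapse subword-level tags to word-level tags.

    word_ids: for each subword, the index of the word it belongs to,
              or None for special tokens such as <s> and </s>.
    subword_tags: the predicted tag for each subword.
    Keeps the tag of the first subword of each word (a common convention).
    """
    word_tags = []
    seen = None
    for wid, tag in zip(word_ids, subword_tags):
        if wid is None or wid == seen:
            continue  # skip special tokens and continuation subwords
        word_tags.append(tag)
        seen = wid
    return word_tags


# Hypothetical subword output for a three-word sentence:
word_ids = [None, 0, 0, 1, 2, 2, None]
subword_tags = ["X", "VERB", "VERB", "PRON", "NOUN", "NOUN", "X"]
print(merge_subword_tags(word_ids, subword_tags))  # → ['VERB', 'PRON', 'NOUN']
```

With a fast tokenizer, the `word_ids` mapping can be obtained from the tokenizer's encoding; the tags come from the argmax over the model's per-token logits.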
Dataset used to train this model: Universal Dependencies v2.8
Space using wietsedv/xlm-roberta-base-ft-udpos28-gd
Evaluation results
Test accuracy on Universal Dependencies v2.8 (self-reported):
- English: 75.0
- Dutch: 77.8
- German: 76.5
- Italian: 70.8
- French: 74.6
- Spanish: 78.7
- Russian: 79.2
- Swedish: 78.9
- Norwegian: 72.7
- Danish: 78.0
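The scores above are token-level accuracies: the fraction of tokens whose predicted tag matches the gold tag. A minimal sketch of that metric (the tag sequences are made-up examples, not UD treebank data):

```python
def pos_accuracy(gold, pred):
    """Token-level POS accuracy: share of positions where the tags match."""
    if len(gold) != len(pred):
        raise ValueError("tag sequences must be the same length")
    if not gold:
        return 0.0
    correct = sum(g == p for g, p in zip(gold, pred))
    return correct / len(gold)


gold = ["NOUN", "VERB", "DET", "NOUN"]
pred = ["NOUN", "VERB", "DET", "ADJ"]
print(pos_accuracy(gold, pred))  # → 0.75
```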