SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a SetFit model that can be used for Text Classification. This SetFit model uses sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer.
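A minimal sketch of this two-phase procedure with the setfit library (the dataset below is a placeholder, not the data this model was trained on):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot training data: a few labeled texts per class
train_dataset = Dataset.from_dict({
    "text": [
        "Die Proteste sind wichtig für den Klimaschutz.",             # "The protests matter for climate protection."
        "Die Blockaden schaden dem Anliegen mehr, als sie nützen.",   # "The blockades hurt the cause more than they help."
    ],
    "label": ["supportive", "opposed"],
})

# The body is the Sentence Transformer; the default head is a LogisticRegression
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=32, num_epochs=1),
    train_dataset=train_dataset,
)
# Phase 1: contrastive fine-tuning of the body on generated sentence pairs;
# Phase 2: fitting the classification head on the fine-tuned embeddings
trainer.train()
```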
Model Details
Model Description
- Model Type: SetFit
- Sentence Transformer body: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
- Classification head: a LogisticRegression instance
- Maximum Sequence Length: 128 tokens
- Number of Classes: 3 classes
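These components can be inspected directly after loading; a quick sketch (the values in the comments reflect the table above and are expectations, not verified output):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("cbpuschmann/klimacoder2_v0.3")
print(model.model_body.max_seq_length)   # 128
print(type(model.model_head).__name__)   # LogisticRegression
print(model.labels)                      # ['neutral', 'opposed', 'supportive'] (order may vary)
```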
Model Sources
- Repository: SetFit on GitHub
- Paper: Efficient Few-Shot Learning Without Prompts
- Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts
Model Labels
Label | Examples |
---|---|
neutral | |
opposed | |
supportive | |
Uses
Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("cbpuschmann/klimacoder2_v0.3")
# Run inference on a German example. In English: "The demands are the same nationwide.
# It is about reintroducing a 9-euro ticket and a 100 km/h speed limit on the autobahns.
# We are also calling for a citizens' council to be set up, which is to develop measures
# for making Germany emission-free by 2030. The proposed solutions are to be recognized
# by the federal government and implemented in policy."
preds = model("Die Forderungen sind landesweit die gleichen. Es geht um die Wiedereinführung eines 9-Euro-Tickets und ein Tempolimit von 100 km/h auf den Autobahnen. Außerdem fordern wir die Einführung eines Gesellschaftsrats. Dieser soll Maßnahmen erarbeiten, wie Deutschland bis 2030 emissionsfrei wird. Die Lösungsansätze sollen von der Bundesregierung anerkannt und in der Politik umgesetzt werden.")
```
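If class probabilities are needed rather than a hard label, SetFitModel also exposes predict_proba; a short sketch (the label order shown in the comment is an assumption and should be checked against model.labels):

```python
# Probabilities per class, aligned with the order of model.labels
probs = model.predict_proba(
    ["Die Forderungen sind landesweit die gleichen."]
)
print(model.labels)  # e.g. ['neutral', 'opposed', 'supportive']
print(probs)         # one row of three probabilities per input text
```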
Training Details
Training Set Metrics
Training set | Min | Median | Max |
---|---|---|---|
Word count | 29 | 67.5889 | 233 |
Label | Training Sample Count |
---|---|
supportive | 60 |
opposed | 60 |
neutral | 60 |
Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
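Tuple-valued entries pair the embedding-phase value with the classifier-phase value. As a sketch, this configuration corresponds roughly to setfit's TrainingArguments as follows (a reconstruction from the list above, not the original training script):

```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(32, 32),                # (embedding phase, classifier phase)
    num_epochs=(5, 5),
    max_steps=-1,                       # -1 means no step cap
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    load_best_model_at_end=True,
)
```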
Training Results
Epoch | Step | Training Loss | Validation Loss |
---|---|---|---|
0.0015 | 1 | 0.2507 | - |
0.0741 | 50 | 0.2612 | - |
0.1481 | 100 | 0.2399 | - |
0.2222 | 150 | 0.1809 | - |
0.2963 | 200 | 0.1155 | - |
0.3704 | 250 | 0.034 | - |
0.4444 | 300 | 0.0045 | - |
0.5185 | 350 | 0.0019 | - |
0.5926 | 400 | 0.0011 | - |
0.6667 | 450 | 0.0005 | - |
0.7407 | 500 | 0.0003 | - |
0.8148 | 550 | 0.0003 | - |
0.8889 | 600 | 0.0002 | - |
0.9630 | 650 | 0.0002 | - |
1.0 | 675 | - | 0.3444 |
1.0370 | 700 | 0.0001 | - |
1.1111 | 750 | 0.0001 | - |
1.1852 | 800 | 0.0001 | - |
1.2593 | 850 | 0.0001 | - |
1.3333 | 900 | 0.0001 | - |
1.4074 | 950 | 0.0001 | - |
1.4815 | 1000 | 0.0002 | - |
1.5556 | 1050 | 0.0001 | - |
1.6296 | 1100 | 0.0001 | - |
1.7037 | 1150 | 0.0 | - |
1.7778 | 1200 | 0.0 | - |
1.8519 | 1250 | 0.0 | - |
1.9259 | 1300 | 0.0 | - |
2.0 | 1350 | 0.0 | 0.3538 |
2.0741 | 1400 | 0.0 | - |
2.1481 | 1450 | 0.0 | - |
2.2222 | 1500 | 0.0 | - |
2.2963 | 1550 | 0.0 | - |
2.3704 | 1600 | 0.0 | - |
2.4444 | 1650 | 0.0 | - |
2.5185 | 1700 | 0.0 | - |
2.5926 | 1750 | 0.0 | - |
2.6667 | 1800 | 0.0 | - |
2.7407 | 1850 | 0.0 | - |
2.8148 | 1900 | 0.0 | - |
2.8889 | 1950 | 0.0 | - |
2.9630 | 2000 | 0.0001 | - |
3.0 | 2025 | - | 0.3657 |
3.0370 | 2050 | 0.0012 | - |
3.1111 | 2100 | 0.0 | - |
3.1852 | 2150 | 0.0 | - |
3.2593 | 2200 | 0.0 | - |
3.3333 | 2250 | 0.0 | - |
3.4074 | 2300 | 0.0 | - |
3.4815 | 2350 | 0.0 | - |
3.5556 | 2400 | 0.0 | - |
3.6296 | 2450 | 0.0 | - |
3.7037 | 2500 | 0.0 | - |
3.7778 | 2550 | 0.0 | - |
3.8519 | 2600 | 0.0 | - |
3.9259 | 2650 | 0.0 | - |
4.0 | 2700 | 0.0 | 0.3644 |
4.0741 | 2750 | 0.0 | - |
4.1481 | 2800 | 0.0 | - |
4.2222 | 2850 | 0.0 | - |
4.2963 | 2900 | 0.0 | - |
4.3704 | 2950 | 0.0 | - |
4.4444 | 3000 | 0.0 | - |
4.5185 | 3050 | 0.0 | - |
4.5926 | 3100 | 0.0 | - |
4.6667 | 3150 | 0.0 | - |
4.7407 | 3200 | 0.0 | - |
4.8148 | 3250 | 0.0 | - |
4.8889 | 3300 | 0.0 | - |
4.9630 | 3350 | 0.0 | - |
5.0 | 3375 | - | 0.3656 |
Framework Versions
- Python: 3.11.10
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.3.1.post300
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```