SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a SetFit model for text classification. It uses sentence-transformers/paraphrase-mpnet-base-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
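
As a minimal sketch of the second step, the fine-tuned Sentence Transformer simply supplies feature vectors for an ordinary scikit-learn classifier. The training texts and labels below are hypothetical placeholders, not examples from this card:

from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Step 1 (contrastive fine-tuning) is assumed to have already produced this body.
body = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")

train_texts = ["placeholder sentence a", "placeholder sentence b"]  # hypothetical few-shot data
train_labels = ["time", "no"]                                       # the two labels of this model

X_train = body.encode(train_texts)  # sentence embeddings as classification features
head = LogisticRegression()
head.fit(X_train, train_labels)     # step 2: train the classification head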

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2 (no, time)
  • Model size: 109M parameters (F32 safetensors)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label: time
  • "i mean it hasn't been that long, my heart is perfectly healthy otherwise."
  • 'otherwise i’m been healthy and all other blood work they did this year was unremarkable. \n'
  • 'but i can not run 3 seconds without breathing for 10 minutes that should say how unhealthy i am.'

Label: no
  • 'one of the ob doctors i work with likes to emphasize these lists are birth "preferences" as the birth plan is ultimately having a healthy baby and mom.'
  • 'some who may seem “soft” to you enjoy the challenge and reward of safely delivering tens of thousands of healthy babies in their career and putting them in their mother’s arms. \n\n'
  • 'so you are right, he is just going to wipe out normal healthy flora , and this includes that wee innocent little lactobacillus the gp wants to put down like old yeller.'

Evaluation

Metrics

Label   Accuracy   Precision   Recall   F1
all     0.75       0.75       0.75     0.75
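
As a hedged sketch, figures like these can be recomputed with scikit-learn once the model is loaded (see Direct Use for Inference below); the held-out texts, gold labels, and micro-averaging choice are assumptions, not details taken from this card:

from setfit import SetFitModel
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

model = SetFitModel.from_pretrained("lucienbaumgartner/temporalInformation_classifier")

test_texts = ["placeholder sentence a", "placeholder sentence b"]  # hypothetical held-out data
test_labels = ["time", "no"]                                       # hypothetical gold labels

preds = model(test_texts)
accuracy = accuracy_score(test_labels, preds)
precision, recall, f1, _ = precision_recall_fscore_support(
    test_labels, preds, average="micro"  # averaging strategy is an assumption
)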

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("lucienbaumgartner/temporalInformation_classifier")
# Run inference
preds = model("i’ve just been making sure that it is healthier food and not unhealthy food.")
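
Because the head is a LogisticRegression instance, class probabilities are available as well. A small sketch (predict_proba is exposed by SetFit models with scikit-learn heads):

# Probability for each class, per input text
probs = model.predict_proba(
    ["but i can not run 3 seconds without breathing for 10 minutes that should say how unhealthy i am."]
)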

Training Details

Training Set Metrics

Training set   Min   Median   Max
Word count     12    25.325   60

Label   Training Sample Count
no      36
time    44

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (10, 10)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 3786
  • eval_max_steps: -1
  • load_best_model_at_end: False
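
These entries mirror the fields of setfit.TrainingArguments, so the run can be reconstructed roughly as follows; the training texts are hypothetical placeholders (the real few-shot examples are those listed under Model Labels):

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(
    batch_size=(16, 16),                # (embedding phase, classifier phase)
    num_epochs=(10, 10),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    end_to_end=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=3786,
)

train_dataset = Dataset.from_dict({
    "text": [  # hypothetical placeholders, two per label so contrastive pairs exist
        "placeholder sentence a", "placeholder sentence b",
        "placeholder sentence c", "placeholder sentence d",
    ],
    "label": ["time", "time", "no", "no"],
})

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()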

Training Results

Epoch Step Training Loss Validation Loss
0.005 1 0.2939 -
0.25 50 0.2641 -
0.5 100 0.195 -
0.75 150 0.0162 -
1.0 200 0.0007 -
1.25 250 0.0003 -
1.5 300 0.0002 -
1.75 350 0.0001 -
2.0 400 0.0001 -
2.25 450 0.0002 -
2.5 500 0.0013 -
2.75 550 0.0002 -
3.0 600 0.0006 -
3.25 650 0.0015 -
3.5 700 0.0008 -
3.75 750 0.0001 -
4.0 800 0.0001 -
4.25 850 0.0007 -
4.5 900 0.0001 -
4.75 950 0.003 -
5.0 1000 0.0001 -
5.25 1050 0.0018 -
5.5 1100 0.0001 -
5.75 1150 0.0001 -
6.0 1200 0.0014 -
6.25 1250 0.0001 -
6.5 1300 0.0009 -
6.75 1350 0.0001 -
7.0 1400 0.0002 -
7.25 1450 0.0 -
7.5 1500 0.0 -
7.75 1550 0.0002 -
8.0 1600 0.0 -
8.25 1650 0.0006 -
8.5 1700 0.0 -
8.75 1750 0.0 -
9.0 1800 0.0 -
9.25 1850 0.0 -
9.5 1900 0.0 -
9.75 1950 0.0 -
10.0 2000 0.0 -

Framework Versions

  • Python: 3.11.7
  • SetFit: 1.1.1
  • Sentence Transformers: 3.3.1
  • Transformers: 4.44.2
  • PyTorch: 2.5.1
  • Datasets: 2.19.0
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}