LLAMA 3 Story Point Estimator - aptanastudio

This model is fine-tuned on issue descriptions from the aptanastudio project and evaluated on the same project for story point estimation.

Model Details

  • Base Model: LLAMA 3.2 1B

  • Training Project: aptanastudio

  • Test Project: aptanastudio

  • Task: Story Point Estimation (Regression)

  • Architecture: PEFT (LoRA)

  • Tokenizer: SentencePiece

  • Input: Issue titles

  • Output: Story point estimation (continuous value)

Usage

import torch
from transformers import AutoModelForSequenceClassification, XLNetTokenizer
from peft import PeftConfig, PeftModel

# Load the PEFT adapter configuration
config = PeftConfig.from_pretrained("DEVCamiloSepulveda/7-LLAMA3SP-aptanastudio")

# Load tokenizer and model
# 'spm_tokenizer.model' is the SentencePiece file shipped with this repository
tokenizer = XLNetTokenizer('spm_tokenizer.model', padding_side='right')
base_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,
    num_labels=1,
    torch_dtype=torch.float16,
    device_map='auto'
)
model = PeftModel.from_pretrained(base_model, "DEVCamiloSepulveda/7-LLAMA3SP-aptanastudio")

# Prepare input text
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")

# Get prediction
outputs = model(**inputs)
story_points = outputs.logits.item()
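The model emits a continuous value, while teams typically plan with a discrete story point scale. A minimal post-processing sketch that snaps the raw prediction to the nearest scale value (the Fibonacci-like scale below is an assumption, not part of the model):

```python
# Snap a raw regression output to the nearest value on a story point scale.
# The Fibonacci-like scale here is an assumption; adjust to your team's scale.
FIB_SCALE = [1, 2, 3, 5, 8, 13, 21]

def snap_to_scale(raw: float, scale=FIB_SCALE) -> int:
    """Return the scale value closest to the raw model prediction."""
    return min(scale, key=lambda p: abs(p - raw))

print(snap_to_scale(3.4))  # a raw prediction of 3.4 maps to 3
```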

Training Details

  • Fine-tuning method: LoRA (Low-Rank Adaptation)
  • Sequence length: 20 tokens
  • Best training epoch: 16 / 20 epochs
  • Batch size: 32
  • Training time: 497.687 seconds
  • Mean Absolute Error (MAE): 3.358
  • Median Absolute Error (MdAE): 2.030
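A hedged sketch of how a comparable LoRA fine-tune could be configured with PEFT; the rank, alpha, dropout, and target modules below are assumptions, since the card does not list the adapter hyperparameters:

```python
from peft import LoraConfig, TaskType

# Hypothetical LoRA configuration for a single-output regression head.
# r, lora_alpha, lora_dropout, and target_modules are assumed values,
# not the ones used to train this adapter.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,   # sequence classification head with num_labels=1
    r=8,                          # low-rank dimension (assumed)
    lora_alpha=16,                # scaling factor (assumed)
    lora_dropout=0.1,             # dropout on LoRA layers (assumed)
    target_modules=["q_proj", "v_proj"],  # common choice for LLaMA-style models
)
```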

Framework versions

  • PEFT 0.14.0
