DeBERTa-v3 Sequence Classification Model
This model was fine-tuned for three-class sentiment classification (negative / neutral / positive) using the Hugging Face transformers library.
Model Details
- Base model: microsoft/deberta-v3-base
- Number of labels: 3 (multi-class classification)
- Fine-tuned on a custom dataset (the configuration can be verified as shown below)
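To check these settings without downloading the full weights, the following sketch loads only the configuration from the Hub (the label names hinted at in the comment are an assumption; they depend on how the config was saved):

from transformers import AutoConfig

# Load just config.json from the Hub
config = AutoConfig.from_pretrained("vinD27/stock_news")
print(config.num_labels)  # expected: 3
print(config.id2label)    # may show generic names like LABEL_0 unless custom names were saved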
Files Included
- pytorch_model.bin: Model weights
- config.json: Model configuration
- tokenizer.json: Tokenizer vocabulary
- special_tokens_map.json: Special token mappings
- tokenizer_config.json: Tokenizer configuration
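from_pretrained fetches these files automatically, but if you want a local copy of the repository, a minimal sketch using the huggingface_hub library's snapshot_download helper (a separate library, not part of this repo) is:

from huggingface_hub import snapshot_download

# Download every file in the repo to the local cache and return the directory path
local_dir = snapshot_download(repo_id="vinD27/stock_news")
print(local_dir)  # contains pytorch_model.bin, config.json, tokenizer files, etc.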
Usage
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
# Load the model and tokenizer from Hugging Face Hub
model_name = "vinD27/stock_news"  # Repo id of this model on the Hugging Face Hub
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Map label indices to human-readable class names
label_mapping = {
0: "negative",
1: "neutral",
2: "positive"
}
# Input text
input_text = "Wow. The stock is amazing"
# Tokenize the input and run inference without tracking gradients
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, padding=True, max_length=128)
with torch.no_grad():
    outputs = model(**inputs)
predicted_class_idx = torch.argmax(outputs.logits, dim=-1).item()  # Index of the highest-scoring class
# Print the results
print(f"Your input is: '{input_text}'")
print(f"And the prediction is: {label_mapping[predicted_class_idx]} ({predicted_class_idx})")