# mibera-v1-merged

Fine-tuned model based on `microsoft/phi-4` using LoRA adapters.
## Model Details
- Base Model: `microsoft/phi-4`
- Fine-tuned on: Custom dataset
- Architecture: Transformer-based Causal LM
- LoRA Adapter Merging: ✅ Yes
- Merged Model: ✅ Ready for inference without adapters (see the merge sketch after this list)
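For reference, merging a LoRA adapter into the base weights is typically done with `peft`'s `merge_and_unload`. The following is a minimal sketch of that step, not the exact procedure used for this model; the adapter path is hypothetical, since the adapter itself is not published.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Hypothetical adapter path; the actual mibera-v1 adapter is not published.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-4")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the low-rank deltas into the base weights so the model
# can be served without the peft runtime.
merged = model.merge_and_unload()
merged.save_pretrained("mibera-v1-merged")
```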
## Training & Fine-tuning Details
- Training Method: Fine-tuning with LoRA (Low-Rank Adaptation)
- LoRA Rank: 32
- Dataset: Custom curated dataset (details not publicly available)
- Training Library: 🤗 Hugging Face `transformers` + `peft` (a configuration sketch follows this list)
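As a rough illustration of the setup above, here is how a rank-32 LoRA fine-tune might be configured with `peft`. Only the rank comes from this card; the alpha, dropout, and target modules are assumptions and would need to match the model's actual module names.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-4")

lora_config = LoraConfig(
    r=32,                                  # LoRA rank stated in this card
    lora_alpha=64,                         # assumed; not documented
    lora_dropout=0.05,                     # assumed; not documented
    target_modules=["qkv_proj", "o_proj"], # assumed Phi-style attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank matrices are trainable
```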
## How to Use the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ivxxdegen/mibera-v1-merged"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load model
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

print("✅ Model loaded successfully!")
```