Tags: Transformers · GGUF · English · llama · text-generation-inference · unsloth · Inference Endpoints · conversational


Uploaded model

  • Developed by: Quazim0t0
  • License: apache-2.0
  • Finetuned from model: unsloth/phi-4-unsloth-bnb-4bit
  • GGUF
  • Trained for 8 hours on an A800 with the Bespoke Stratos 17k dataset.
  • Trained for 6 hours on an A800 with the Bespoke Stratos 35k dataset.
  • Trained for 2 hours on an A800 with the Benford's Law Reasoning Small 500 dataset, taking care to avoid overfitting.
  • About $10 of training... I'm actually amazed by the results.

If you are using this model with Open WebUI, here is a simple function to organize the model's responses: https://openwebui.com/f/quaz93/phi4_turn_r1_distill_thought_function_v1
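Since the model ships as a 4-bit GGUF file, it can also be run locally with llama.cpp-compatible tooling. The sketch below is only an illustration, not an official snippet from this repository: it assumes llama-cpp-python is installed and uses a placeholder .gguf filename (substitute whichever quant file you downloaded).

```python
# Minimal local-inference sketch using llama-cpp-python (assumed installed via `pip install llama-cpp-python`).
# The model_path below is a placeholder; point it at the GGUF file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi4.Turn.Benford-Reasoning-Experimental-v0.2-Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to GPU if available, or 0 for CPU-only
)

# Ask a simple question and print the model's reply.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain Benford's Law in two sentences."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```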

Format: GGUF
Model size: 14.7B params
Architecture: llama
Quantization: 4-bit


Model tree for Quazim0t0/Phi4.Turn.Benford-Reasoning-Experimental-v0.2

Base model: microsoft/phi-4