Uploaded model

  • Developed by: johnnietien
  • License: apache-2.0
  • Finetuned from model: meta-llama/Llama-3.2-3B-Instruct

This is one of my first reasoning models, and it can produce an “aha moment” similar to DeepSeek’s R1. The entire GRPO training process has been optimized to use 80% less VRAM than Hugging Face + FA2, which makes it possible to reproduce R1-Zero’s “aha moment” on just 7GB of VRAM with Llama-3.2-3B. Please note, this is not a fine-tune of DeepSeek’s R1 distilled models, nor does it use data distilled from R1 for tuning. It converts a standard model into a full-fledged reasoning model using GRPO.
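
For reference, here is a minimal sketch of what GRPO training on GSM8K can look like using TRL's GRPOTrainer. This is not the exact recipe used for this model; the reward function and hyperparameters below are illustrative assumptions only.

```python
# A minimal GRPO training sketch with TRL's GRPOTrainer.
# NOT the author's published recipe -- reward function and
# hyperparameters are illustrative assumptions.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# GSM8K, the benchmark referenced in this repo's name.
dataset = load_dataset("openai/gsm8k", "main", split="train")
dataset = dataset.map(lambda x: {"prompt": x["question"]})

def format_reward(completions, **kwargs):
    # Toy reward: +1 if the completion contains the "####" answer
    # marker used by GSM8K solutions. A real reward would also parse
    # the number and compare it to the reference answer.
    return [1.0 if "####" in c else 0.0 for c in completions]

config = GRPOConfig(
    output_dir="grpo-llama32-3b",
    per_device_train_batch_size=8,
    num_generations=8,        # completions sampled per prompt (the "group")
    max_completion_length=512,
    learning_rate=1e-6,
)

trainer = GRPOTrainer(
    model="meta-llama/Llama-3.2-3B-Instruct",  # base model listed above
    reward_funcs=format_reward,
    args=config,
    train_dataset=dataset,
)
trainer.train()
```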

  • Format: GGUF, 4-bit quantized (Q4_K_M)
  • Model size: 3.21B params
  • Architecture: llama
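
Because the upload is a 4-bit GGUF, it can be run locally with llama.cpp bindings. A minimal sketch with llama-cpp-python; the exact .gguf filename inside the repo is an assumption (the glob below assumes one file matching the q4_k_m pattern), so check the repo's file listing for the real name.

```python
# A minimal sketch of running the 4-bit GGUF with llama-cpp-python.
# The filename glob is an assumption; verify the actual .gguf name.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="johnnietien/JTR1-Llama32-3b-bestgsm8k-gguf-q4_k_m",
    filename="*q4_k_m.gguf",  # glob pattern matched against repo files
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "What is 15% of 240? Think step by step."}],
)
print(out["choices"][0]["message"]["content"])
```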
