johnnietien committed
Commit 0f6b4a6 · verified · 1 Parent(s): 3fad392

Update README.md

Files changed (1):
  1. README.md (+5 −6)
README.md CHANGED
@@ -1,9 +1,10 @@
  ---
- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
+ base_model:
+ - meta-llama/Llama-3.2-3B-Instruct
  tags:
  - text-generation-inference
  - transformers
- - unsloth
+ - Reasoning
  - llama
  - gguf
  license: apache-2.0
@@ -15,8 +16,6 @@ language:
 
  - **Developed by:** johnnietien
  - **License:** apache-2.0
- - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
+ - **Finetuned from model:** meta-llama/Llama-3.2-3B-Instruct
 
- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ This is one of my first reasoning models, and it can have an “aha moment” just like DeepSeek’s R1. We’ve enhanced the entire GRPO process, making it use 80% less VRAM than Hugging Face + FA2. This allows you to reproduce R1-Zero’s “aha moment” on just 7GB of VRAM using Llama-3.2-3B. Please note, this isn’t fine-tuning DeepSeek’s R1 distilled models or using distilled data from R1 for tuning. This is converting a standard model into a full-fledged reasoning model using GRPO.
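
To make the GRPO recipe described above concrete, here is a minimal training sketch using Unsloth's `FastLanguageModel` together with TRL's `GRPOTrainer`. This is not the training script behind this checkpoint: the `correctness_reward` function, the GSM8K dataset choice, and every hyperparameter below are illustrative assumptions.

```python
# Minimal GRPO sketch with Unsloth + TRL (illustrative; not the exact training setup).
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Load the base instruct model in 4-bit so the 3B model fits in a small VRAM budget.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-3B-Instruct",
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical reward: +1 when the completion contains the reference answer.
# A real setup would typically also reward the reasoning format
# (e.g. <think>...</think> tags) so the "aha moment" behaviour can emerge.
def correctness_reward(completions, answer, **kwargs):
    return [1.0 if a in c else 0.0 for c, a in zip(completions, answer)]

# Hypothetical dataset: GSM8K, mapped to "prompt" and "answer" columns.
dataset = load_dataset("openai/gsm8k", "main", split="train")
dataset = dataset.map(lambda x: {"prompt": x["question"],
                                 "answer": x["answer"].split("####")[-1].strip()})

training_args = GRPOConfig(
    output_dir="llama-3.2-3b-grpo",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_generations=4,            # completions sampled per prompt for the group baseline
    max_prompt_length=256,
    max_completion_length=512,
    learning_rate=5e-6,
    max_steps=250,
)

trainer = GRPOTrainer(
    model=model,
    reward_funcs=[correctness_reward],
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```

GRPO needs no separate value model: each prompt is sampled `num_generations` times and the rewards are normalized within that group to form the advantage, which, combined with 4-bit loading and LoRA, is what keeps the VRAM footprint small for a 3B model.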