
Information

Advanced, high-quality, lightweight reasoning in a tiny model that you can run locally in Q8 on your phone! 😲

⚠️ This is an experimental version: it may not always answer your question properly or correctly. Reasoning may not always work in long conversations at the moment, as we trained it on single-turn conversations only. To create this model, we fine-tuned SmolLM2-1.7B-Instruct on an advanced reasoning-pattern dataset (half synthetic, half written manually by us). It is supposed to output like this:

<|im_start|>user
What are you<|im_end|>
<|im_start|>assistant
<think>
Alright, the user just asked 'What are you', meaning they want to know who I am. I think my name is Superthoughts (lite version), created by Pinkstack in January 2025. I'm ready to answer their question.
</think>
Welcome! I'm Superthoughts (lite) created by Pinkstack in January 2025. Ready to help you with whatever you need!<|im_end|>
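
As a rough illustration (not the authors' code), the snippet below feeds this prompt format to the GGUF via llama-cpp-python and splits the <think> reasoning from the final answer; the local filename is an assumption.

```python
# Rough illustration; model_path is an assumed local filename.
import re
from llama_cpp import Llama

llm = Llama(model_path="superthoughts-lite-q8_0.gguf", n_ctx=2048)

# Build the prompt exactly as in the example above.
prompt = (
    "<|im_start|>user\n"
    "What are you<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=400, temperature=0.4, stop=["<|im_end|>"])
text = out["choices"][0]["text"]

# The reasoning sits between <think> tags; the answer follows it.
m = re.search(r"<think>(.*?)</think>\s*(.*)", text, re.DOTALL)
if m:
    print("Reasoning:", m.group(1).strip())
    print("Answer:", m.group(2).strip())
else:
    print(text)
```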

Which quant is right for you?

  • Q4_K_M: Usable on most devices; output quality is acceptable, but reasoning quality is low.
  • Q6_K: The middle ground; quality is better than Q4_K_M, but reasoning is still more limited than Q8_0.
  • Q8_0: RECOMMENDED. Yields very high-quality results, with good reasoning and good answers at a fast speed; on a Snapdragon 8 Gen 2 with 16 GB of RAM it averages about 13 tokens per second, see examples below.
  • F16: Maximum-quality GGUF quant; not needed for most tasks, as results are very similar to Q8_0.
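
Whichever quant you choose, the file can be fetched straight from the repo with huggingface_hub. A minimal sketch follows; the exact GGUF filename inside the repo is an assumption and may differ.

```python
# Minimal download sketch; the filename is assumed and may not match
# the actual file names in the repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Pinkstack/Superthoughts-lite-1.8B-experimental-o1-GGUF",
    filename="superthoughts-lite-1.8B-experimental-o1.Q8_0.gguf",  # assumed name
)
print("Downloaded to:", path)
```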

Examples:

All responses below were generated with no system prompt, a 400-token limit, and a temperature of 0.7 (not recommended; 0.3 - 0.5 is better), inside the Android application PocketPal via the GGUF Q8 quant, using the model's prompt format.

[Four example screenshots]
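
To reproduce these settings outside PocketPal, here is a minimal sketch using llama-cpp-python's chat API, which applies the ChatML prompt format for you; the local filename is an assumption, and the temperature is set inside the recommended 0.3 - 0.5 range rather than 0.7.

```python
# Minimal sketch; model_path is an assumed local filename.
from llama_cpp import Llama

llm = Llama(
    model_path="superthoughts-lite-q8_0.gguf",  # assumed filename
    n_ctx=2048,
    chat_format="chatml",  # the model's prompt format
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What are you"}],
    max_tokens=400,    # same cap as the examples above
    temperature=0.4,   # inside the recommended 0.3-0.5 range
)
print(resp["choices"][0]["message"]["content"])
```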

Uploaded model

  • Developed by: Pinkstack
  • License: apache-2.0
  • Finetuned from model: HuggingFaceTB/SmolLM2-1.7B-Instruct

This SmolLM2 model was trained with Unsloth and Hugging Face's TRL library.
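
For context, a training setup along these lines might look like the sketch below; the dataset file, LoRA settings, and hyperparameters are illustrative assumptions rather than the authors' actual configuration, and exact SFTTrainer arguments vary by TRL version.

```python
# Illustrative sketch only: dataset path and hyperparameters are assumptions,
# not the authors' actual training configuration.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceTB/SmolLM2-1.7B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset of ChatML-formatted reasoning examples.
dataset = load_dataset("json", data_files="reasoning_patterns.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="superthoughts-lite",
    ),
)
trainer.train()
```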

GGUF

  • Model size: 1.81B params
  • Architecture: llama

