Only exits reasoning some of the time - endless inference

#2
by quantflex - opened

Hello! (sorry for the long upcoming post)

I was wondering if by any chance you've noticed that this model, roughly 8 times out of 10, just keeps reasoning and never stops at all? Or takes an extremely long time to reach a conclusion?

After looking at the official readme it does say:

Repetition Issue: The model tends to repeat itself when answering high-difficulty questions. Please increase the repetition_penalty to mitigate this issue.

However, even after increasing the repetition penalty (a lot), it keeps happening. It's also not so much that it repeats itself; it's more that it just won't stop reasoning.

I've noticed that the official special_tokens_map.json and some other files define all kinds of special tokens. Could that be the cause? Maybe they aren't being used?
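One way to check that theory is to look at which token the tokenizer config declares as end-of-sequence: if the GGUF's EOS metadata doesn't map to the same token, llama.cpp will never see an end-of-generation signal and the model just keeps going. Here's a minimal sketch of reading that field; the JSON below is a hypothetical stand-in for the real special_tokens_map.json, which may differ.

```python
import json

# Hypothetical special_tokens_map.json contents for illustration only;
# the model's actual file may declare different tokens.
special_tokens_map = json.loads("""
{
  "eos_token": {"content": "<|im_end|>"},
  "pad_token": {"content": "<|endoftext|>"}
}
""")

# The string the model has to emit for generation to stop.
eos = special_tokens_map["eos_token"]["content"]
print(eos)
```

If the string printed here isn't what the GGUF conversion recorded as EOS, the sampler will sample right past the intended stop token.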

I'm running it with llama.cpp like this:

./llama-cli -m SmallThinker-3B-Preview-Q5_K_M.gguf -p 'You are a helpful assistant.' --temp 0.7 --top-p 0.8 --top-k 20 --repeat-penalty 1.1 --conversation --chat-template chatml

All of those flags were simply taken from the official generation_config.json.
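In case it's useful to anyone hitting the same thing: as a stopgap, llama-cli lets you cap the number of generated tokens with `-n` and halt on a literal string with `--reverse-prompt`. This is only a hedged workaround sketch, not a fix for the underlying EOS issue; the stop string `<|im_end|>` is an assumption based on the chatml template, and the model filename is just the one from my command above.

```shell
# Workaround sketch: bound generation length and stop on the ChatML end
# marker, in case the GGUF's EOS metadata is wrong.
./llama-cli -m SmallThinker-3B-Preview-Q5_K_M.gguf \
  -p 'You are a helpful assistant.' \
  --temp 0.7 --top-p 0.8 --top-k 20 --repeat-penalty 1.1 \
  --conversation --chat-template chatml \
  -n 2048 --reverse-prompt '<|im_end|>'
```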

So I'm a bit lost as to what's happening, do you have any idea what it could be? Thank you and happy new year!

quantflex changed discussion status to closed
