@albertvillanova
@clefourrier
Is there any way to resolve this issue, or perhaps an instruction to set an output token limit, so that it doesn't throw errors and works appropriately even with certain limitations?
P.S. Working with limits is MUCH better than not working at all.
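In the meantime, a crude client-side cap is one way to keep things working within limits. This is only a sketch of the idea, not part of Open Deep Research or any Hugging Face API: the `cap_tokens` helper and its whitespace-based token count are my own simplification, and a real setup should count tokens with the model's own tokenizer.

```python
def cap_tokens(text: str, budget: int) -> str:
    """Truncate text to at most `budget` whitespace-delimited tokens.

    Whitespace splitting is only a rough proxy for real tokenization;
    swap in the model's tokenizer for counts that match the API's.
    """
    tokens = text.split()
    if len(tokens) <= budget:
        return text
    return " ".join(tokens[:budget])

# Keep only the first 5 tokens of a longer answer.
print(cap_tokens("one two three four five six seven", 5))
# prints: one two three four five
```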
Kostiantyn Sytnyk (Kumala3)
AI & ML interests: AI tools/LLMs, Web Dev, Ethical Hacking
Recent Activity
commented on the article "Open-source DeepResearch – Freeing our search agents" (about 24 hours ago)
commented on the article "Open-source DeepResearch – Freeing our search agents" (about 24 hours ago)
Organizations: None yet
Kumala3's activity
commented on "Open-source DeepResearch – Freeing our search agents" (about 24 hours ago)
commented on "Open-source DeepResearch – Freeing our search agents" (about 24 hours ago)
I am currently exploring open-source alternatives to OpenAI's Deep Research, as it's a really great product; for me, it's one of the most useful releases since ChatGPT launched in 2022, because the research results are incredibly high quality, not just simple lookups with a "search" function.
I decided to try out this Open Deep Research via a Hugging Face Space and ran into an issue where the returned output exceeded the 128K token limit:
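One workaround I'd consider for outputs that blow past the 128K limit is splitting the work into pieces that each stay under a budget. Sketch only: `chunk_text` is a hypothetical helper of mine, and the whitespace token count is a stand-in for the real tokenizer.

```python
def chunk_text(text: str, limit: int = 128_000) -> list[str]:
    """Split text into consecutive pieces of at most `limit` tokens,
    where a "token" here is approximated by whitespace splitting."""
    tokens = text.split()
    return [" ".join(tokens[i:i + limit]) for i in range(0, len(tokens), limit)]

# 10 tokens with a limit of 4 yields chunks of sizes 4, 4, and 2.
chunks = chunk_text("a " * 10, limit=4)
print(len(chunks))  # prints: 3
```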