This sounds pretty interesting, so I upvoted based on the description. However, the demo implementation definitely needs attention and work. On several occasions, after long waits in 100+ user queues, I repeatedly get "Error in generating model output:
litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 419624 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}". So this seems pretty basic: the demo needs to be crafted so it manages token limits correctly at the right time and place, trimming or summarizing the conversation before it exceeds the model's context window. Absent that ....
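
For what it's worth, here is a minimal sketch of the kind of guard I mean, assuming a litellm-based agent loop. The model name, headroom constant, and history contents are illustrative, not taken from the demo's code, and a real agent would summarize evicted turns rather than silently drop them:

```python
import litellm

MAX_CONTEXT = 128_000        # limit cited in the error above
RESERVED_FOR_OUTPUT = 4_096  # headroom for the reply; assumed value

def fit_to_window(messages, model="gpt-4o"):
    """Evict the oldest non-system turns until the prompt fits the budget."""
    budget = MAX_CONTEXT - RESERVED_FOR_OUTPUT
    trimmed = list(messages)
    # Keep the system prompt (index 0) and the most recent turn at minimum.
    while len(trimmed) > 2 and litellm.token_counter(model=model, messages=trimmed) > budget:
        del trimmed[1]  # drop the oldest turn after the system prompt
    return trimmed

history = [
    {"role": "system", "content": "You are a research agent."},
    # ... many accumulated tool-call and observation turns ...
    {"role": "user", "content": "Summarize the findings so far."},
]
response = litellm.completion(model="gpt-4o", messages=fit_to_window(history))
```

Something along these lines, run before every model call, would turn the hard 400 error into graceful degradation.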
Elan Moritz
ElanInPhilly
AI & ML interests
Occasional inventor & applied epistemologist. Imagining fascinating things and working to make them real. Talks about #knowledge, #automation, #fusionenergy, and #machineintelligence
Recent Activity
commented on an article (1 day ago): Open-source DeepResearch – Freeing our search agents
upvoted an article (2 days ago): Open-source DeepResearch – Freeing our search agents
upvoted a paper (9 months ago): ChemLLM: A Chemical Large Language Model
Organizations
None yet