
Elan Moritz

ElanInPhilly

AI & ML interests

Occasional inventor & applied epistemologist 😎 Imagining fascinating things and working to make them real. Talks about #knowledge, #automation, #fusionenergy, and #machineintelligence

Recent Activity

Organizations

None yet

ElanInPhilly's activity


This sounds pretty interesting, so I upvoted based on the description. However, the demo implementation definitely needs attention and work. On several occasions now, after long waits in 100+ user queues, I have repeatedly gotten "Error in generating model output:
litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 419624 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}". So this seems pretty basic, i.e. the demo definitely needs to be crafted so the model handles the correct token limits at the right time and place. Absent that ....
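A minimal sketch of the kind of guard the demo seems to be missing: trim the message history to fit the model's context window before each call. All names here (`trim_messages`, the 4-chars-per-token estimate, the reserve budget) are illustrative assumptions, not the demo's actual code; a real implementation would count tokens with the model's tokenizer.

```python
# Hypothetical guard against context_length_exceeded errors.
# Assumption: ~4 characters per token as a crude estimate (real code
# should use the model's tokenizer, e.g. tiktoken, for exact counts).

def approx_tokens(text: str) -> int:
    """Rough token estimate for a string."""
    return max(1, len(text) // 4)

def trim_messages(messages, max_context=128_000, reserve=4_000):
    """Drop the oldest non-system messages until the conversation
    fits within max_context minus a reserve for the model's reply."""
    budget = max_context - reserve
    trimmed = list(messages)
    # Keep trimmed[0] (assumed to be the system prompt); drop oldest turns.
    while len(trimmed) > 1 and sum(approx_tokens(m["content"]) for m in trimmed) > budget:
        del trimmed[1]
    return trimmed
```

The same idea can be done more gracefully by summarizing dropped turns instead of discarding them, but even this blunt truncation would prevent a 419k-token request from ever reaching a 128k-token model.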

upvoted an article 2 days ago

Open-source DeepResearch – Freeing our search agents

• 640
liked a Space 12 months ago