Girolamo Jeyen
RamoreRemora
AI & ML interests
None yet
Recent Activity
New activity 5 months ago in bartowski/Reflection-Llama-3.1-70B-GGUF: "Having bad results, how should I use this model?"
Organizations
None yet
RamoreRemora's activity
Over 2 tok/sec agg backed by NVMe SSD on 96GB RAM + 24GB VRAM AM5 rig with llama.cpp · 9 · #13 opened 13 days ago by ubergarm
Having bad results, how should I use this model? · 5 · #5 opened 5 months ago by RamoreRemora
The original model was updated to fix a bug. Is this repo using the updated version? · 2 · #4 opened 5 months ago by RamoreRemora
GGGGGGGGGGGGGGGGGGGGGGGGGGGG · 9 · #2 opened 7 months ago by x4k
Llama.cpp fixes have been merged, requires GGUF regen · 3 · #5 opened 8 months ago by RamoreRemora
Models will need to be updated after this fix goes live · #1 opened 9 months ago by RamoreRemora
Dataset for importance matrix? · 4 · #3 opened about 1 year ago by RamoreRemora
Doesn't load on text-generation-webui · #1 opened about 1 year ago by RamoreRemora