- A Closer Look into Mixture-of-Experts in Large Language Models
  Paper • 2406.18219 • Published • 16
- VisionZip: Longer is Better but Not Necessary in Vision Language Models
  Paper • 2412.04467 • Published • 107
- p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay
  Paper • 2412.04449 • Published • 6
- ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing
  Paper • 2412.14711 • Published • 16
Collections including paper arxiv:2412.14711
- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 146
- Orion-14B: Open-source Multilingual Large Language Models
  Paper • 2401.12246 • Published • 13
- MambaByte: Token-free Selective State Space Model
  Paper • 2401.13660 • Published • 54
- MM-LLMs: Recent Advances in MultiModal Large Language Models
  Paper • 2401.13601 • Published • 47