KV Caching Explained: Optimizing Transformer Inference Efficiency Article • by not-lain • 7 days ago • 23
FlipSketch: Flipping Static Drawings to Text-Guided Sketch Animations Paper • 2411.10818 • Published Nov 16, 2024 • 24
MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation Paper • 2410.11779 • Published Oct 15, 2024 • 25
MathScale: Scaling Instruction Tuning for Mathematical Reasoning Paper • 2403.02884 • Published Mar 5, 2024 • 17
Scaling Rectified Flow Transformers for High-Resolution Image Synthesis Paper • 2403.03206 • Published Mar 5, 2024 • 61
— UI is a good thing 💅 — Collection cool spaces with a cool UI, what could be better? • 5 items • Updated Jun 18, 2024 • 13
Latent Consistency Models LoRAs Collection Latent Consistency Models for Stable Diffusion - LoRAs and full fine-tuned weights • 4 items • Updated Nov 10, 2023 • 102
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model Paper • 2211.05100 • Published Nov 9, 2022 • 28
Textbooks Are All You Need II: phi-1.5 technical report Paper • 2309.05463 • Published Sep 11, 2023 • 87
ImageBrush: Learning Visual In-Context Instructions for Exemplar-Based Image Manipulation Paper • 2308.00906 • Published Aug 2, 2023 • 13