---
title: README
emoji: π
colorFrom: green
colorTo: yellow
sdk: gradio
pinned: false
---
# Welcome to Our Extreme Quantization Hub
Here, we focus on models built with **extreme quantization techniques**. Our mission is to push the boundaries of this technology, making it accessible for the community and setting new standards for the field.
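To make "extreme" concrete: BitNet-style 1.58-bit models constrain every weight to the ternary set {-1, 0, +1}. Below is a minimal sketch of absmean ternary quantization in the style of BitNet b1.58, for illustration only; the real training pipeline wraps this in a straight-through estimator and fuses it into custom kernels.

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor):
    """Round weights to {-1, 0, +1}, scaled by their mean absolute value.

    Illustrative sketch of BitNet b1.58-style weight quantization,
    not the exact training-time implementation.
    """
    scale = w.abs().mean().clamp(min=1e-5)  # per-tensor absmean scale
    w_q = (w / scale).round().clamp(-1, 1)  # ternary weights
    return w_q, scale

w = torch.randn(4, 4)
w_q, scale = absmean_ternary_quantize(w)
print(w_q)          # entries in {-1.0, 0.0, 1.0}
print(w_q * scale)  # dequantized approximation of w
```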
---
### π **Latest Releases**: 8B Models Fine-tuned with the BitNet Architecture
You can learn more about how we created the following models [in this blog post](https://huggingface.co/blog/1_58_llm_extreme_quantization); a loading sketch follows the list below.
- **[Llama3-8B-1.58-100B-tokens](https://huggingface.co/HF1BitLLM/Llama3-8B-1.58-100B-tokens)**
  *Fine-tuned on 100B tokens for maximum performance.*
- **[Llama3-8B-1.58-Linear-10B-tokens](https://huggingface.co/HF1BitLLM/Llama3-8B-1.58-Linear-10B-tokens)**
  *Fine-tuned with a Linear Lambda scheduler on 10B tokens.*
- **[Llama3-8B-1.58-Sigmoid-k100-10B-tokens](https://huggingface.co/HF1BitLLM/Llama3-8B-1.58-Sigmoid-k100-10B-tokens)**
  *Fine-tuned with a Sigmoid Lambda scheduler with k=100 on 10B tokens.*
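As a minimal sketch of how these checkpoints can be loaded, assuming a `transformers` version that includes the BitNet quantization integration (the exact requirements are described in the blog post):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HF1BitLLM/Llama3-8B-1.58-100B-tokens"

# The quantization config ships with the checkpoint, so a plain
# from_pretrained call should be enough once the integration is available.
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="cuda", torch_dtype=torch.bfloat16
)
# If the repo does not ship its own tokenizer files, the base
# Llama 3 8B tokenizer can be used instead.
tokenizer = AutoTokenizer.from_pretrained(model_id)

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids.cuda()
output = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```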
---
Join us in the era of extreme quantization as we continue to push this technology forward!