---
title: README
emoji: πŸ“ˆ
colorFrom: green
colorTo: yellow
sdk: gradio
pinned: false
---

# Welcome to Our Extreme Quantization Hub

Here, we focus on models built with **extreme quantization techniques**. Our mission is to push the boundaries of this technology, making it accessible for the community and setting new standards for the field.

---

### πŸš€ **Latest Releases**: 8B Models Fine-tuned on BitNet Architecture
You can learn more about how we created the following models [in this blog post](https://huggingface.co/blog/1_58_llm_extreme_quantization). A short loading sketch follows the list below.

- **[Llama3-8B-1.58-100B-tokens](https://huggingface.co/HF1BitLLM/Llama3-8B-1.58-100B-tokens)**  
  *Fine-tuned on 100B tokens for maximum performance.*

- **[Llama3-8B-1.58-Linear-10B-tokens](https://huggingface.co/HF1BitLLM/Llama3-8B-1.58-Linear-10B-tokens)**  
  *Fine-tuned with a Linear Lambda scheduler on 10B tokens.*

- **[Llama3-8B-1.58-Sigmoid-k100-10B-tokens](https://huggingface.co/HF1BitLLM/Llama3-8B-1.58-Sigmoid-k100-10B-tokens)**  
  *Fine-tuned with a Sigmoid Lambda scheduler with k=100 on 10B tokens.*
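
Below is a minimal sketch of loading one of the released checkpoints with the Hugging Face `transformers` library. It assumes a recent `transformers` version with BitNet (1.58-bit) support; see the blog post above for the exact requirements.

```python
# Sketch: load a 1.58-bit checkpoint and generate text.
# Assumes a transformers version with BitNet support installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HF1BitLLM/Llama3-8B-1.58-100B-tokens"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",           # place weights on the available GPU(s)
    torch_dtype=torch.bfloat16,  # computation dtype for non-quantized parts
)

prompt = "Extreme quantization means"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```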

---

Join us in the era of extreme quantization as we continue to push this technology forward!