Update README.md
README.md (CHANGED):
@@ -15,6 +15,8 @@ tags:
 
 # GGUF and "i-matrix" quantized versions of watt-ai/watt-tool-8B
 
+**watt-tool-8B** is a fine-tuned language model based on LLaMa-3.1-8B-Instruct, optimized for tool usage and multi-turn dialogue. It achieves state-of-the-art performance on the [Berkeley Function-Calling Leaderboard (BFCL)](https://gorilla.cs.berkeley.edu/leaderboard.html).
+
 Using [LLaMA C++](https://github.com/ggerganov/llama.cpp) release [b4585](https://github.com/ggerganov/llama.cpp/releases/tag/b4585) for quantization.
 
 Original model: [watt-ai/watt-tool-8B](https://huggingface.co/watt-ai/watt-tool-8B)
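
For context, the commands below are a minimal sketch of an i-matrix quantization workflow with the llama.cpp tools from that release (`convert_hf_to_gguf.py`, `llama-imatrix`, `llama-quantize`). The file names and the calibration text are placeholders, not the exact inputs used for these quants.

```bash
# Convert the original Hugging Face checkpoint to a full-precision GGUF
# (run from a llama.cpp checkout at b4585 with its Python requirements installed).
python convert_hf_to_gguf.py ./watt-tool-8B --outtype f16 --outfile watt-tool-8B-F16.gguf

# Build an importance matrix ("i-matrix") from a calibration text file.
./llama-imatrix -m watt-tool-8B-F16.gguf -f calibration.txt -o watt-tool-8B.imatrix

# Quantize with the importance matrix, e.g. to Q4_K_M.
./llama-quantize --imatrix watt-tool-8B.imatrix watt-tool-8B-F16.gguf watt-tool-8B-Q4_K_M.gguf Q4_K_M
```

The resulting file can then be loaded with the same release, e.g. `./llama-cli -m watt-tool-8B-Q4_K_M.gguf`.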