Triangle104 committed · Commit c000715 · verified · 1 parent: cc701d4

Update README.md

Files changed (1): README.md (+12, -0)
README.md CHANGED
@@ -13,6 +13,18 @@ tags:
  This model was converted to GGUF format from [`arcee-ai/Virtuoso-Lite`](https://huggingface.co/arcee-ai/Virtuoso-Lite) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/arcee-ai/Virtuoso-Lite) for more details on the model.

+ ---
+ Model details:
+ Virtuoso-Lite (10B) is our next-generation, 10-billion-parameter language model based on the Llama-3 architecture. It is distilled from Deepseek-v3 using ~1.1B tokens/logits, allowing it to achieve robust performance at a significantly reduced parameter count compared to larger models. Despite its compact size, Virtuoso-Lite excels in a variety of tasks, demonstrating advanced reasoning, code generation, and mathematical problem-solving capabilities.
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
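
For reference, the brew-based flow mentioned above typically continues as in the sketch below. The GGUF repo and file names are placeholders (this commit does not list the exact quantized filename); `--hf-repo` and `--hf-file` are standard llama.cpp CLI flags for pulling a GGUF directly from the Hub.

```bash
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Run inference, fetching the GGUF straight from the Hugging Face Hub.
# Placeholder repo/file names: substitute the actual quant from this repository.
llama-cli --hf-repo Triangle104/Virtuoso-Lite-GGUF \
  --hf-file virtuoso-lite-q4_k_m.gguf \
  -p "Explain the difference between a stack and a queue."
```

The same `--hf-repo`/`--hf-file` pair also works with `llama-server` if you prefer an OpenAI-compatible HTTP endpoint over a one-off CLI run.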