Tymkolt committed (verified) · Commit b6f4495 · Parent: 87ca770

Upload README.md with huggingface_hub

Files changed (1): README.md added (+51, -0)
---
license: apache-2.0
datasets:
- Helsinki-NLP/opus_paracrawl
- turuta/Multi30k-uk
language:
- uk
- en
metrics:
- bleu
library_name: peft
pipeline_tag: text-generation
base_model: lang-uk/dragoman
tags:
- translation
- llama-cpp
- gguf-my-lora
widget:
- text: '[INST] who holds this neighborhood? [/INST]'
model-index:
- name: Dragoman
  results:
  - task:
      type: translation
      name: English-Ukrainian Translation
    dataset:
      name: FLORES-101
      type: facebook/flores
      config: eng_Latn-ukr_Cyrl
      split: devtest
    metrics:
    - type: bleu
      value: 32.34
      name: Test BLEU
---

# Tymkolt/dragoman-F16-GGUF

This LoRA adapter was converted to GGUF format from [`lang-uk/dragoman`](https://huggingface.co/lang-uk/dragoman) using ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/lang-uk/dragoman) for more details.

## Use with llama.cpp

```bash
# with the CLI
llama-cli -m base_model.gguf --lora dragoman-f16.gguf (...other args)

# with the server
llama-server -m base_model.gguf --lora dragoman-f16.gguf (...other args)
```

For more on LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
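
Once `llama-server` is running with the adapter loaded, it accepts generation requests over HTTP. A minimal sketch of building a request body for its `/completion` endpoint, using the `[INST] ... [/INST]` prompt format from the widget example above (the prompt text and generation parameters here are illustrative, not prescribed by the model card):

```python
import json

# Illustrative request body for llama-server's /completion endpoint.
# The prompt follows the [INST] ... [/INST] format shown in the widget.
payload = {
    "prompt": "[INST] who holds this neighborhood? [/INST]",
    "n_predict": 64,      # cap on the number of generated tokens
    "temperature": 0.1,   # low temperature for more deterministic output
}

body = json.dumps(payload)
print(body)
```

The resulting JSON can then be POSTed to the server (by default at `http://localhost:8080/completion`) with any HTTP client, e.g. `curl -d "$body"`.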