NikolayKozloff committed
Commit aee616d · verified · 1 Parent(s): f89415a

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +107 -0

README.md ADDED

---
base_model: Weyaxi/Einstein-v7-Qwen2-7B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- piqa
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
- totally-not-an-llm/EverythingLM-data-V3
- HuggingFaceH4/no_robots
- OpenAssistant/oasst_top1_2023-08-25
- WizardLM/WizardLM_evol_instruct_70k
- abacusai/SystemChat-1.1
- H-D-T/Buzz-V1.2
language:
- en
license: other
tags:
- axolotl
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- qwen
- qwen2
- llama-cpp
- gguf-my-repo
---

# NikolayKozloff/Einstein-v7-Qwen2-7B-Q8_0-GGUF

This model was converted to GGUF format from [`Weyaxi/Einstein-v7-Qwen2-7B`](https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
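
Afterwards you can sanity-check the install; recent llama.cpp builds accept a `--version` flag that prints the build info:

```bash
# Verify that the llama.cpp binaries are on the PATH and report their build
llama-cli --version
llama-server --version
```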

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Einstein-v7-Qwen2-7B-Q8_0-GGUF --hf-file einstein-v7-qwen2-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
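
Because the model is finetuned for ChatML (see the tags above), interactive conversation mode may give better results than one-shot prompting. A minimal sketch, assuming your build supports llama-cli's `-cnv` (conversation) flag; the system prompt here is only illustrative:

```bash
# Start an interactive chat session using the model's chat template
llama-cli --hf-repo NikolayKozloff/Einstein-v7-Qwen2-7B-Q8_0-GGUF --hf-file einstein-v7-qwen2-7b-q8_0.gguf -cnv -p "You are a helpful science assistant."
```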

### Server:
```bash
llama-server --hf-repo NikolayKozloff/Einstein-v7-Qwen2-7B-Q8_0-GGUF --hf-file einstein-v7-qwen2-7b-q8_0.gguf -c 2048
```
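
Once the server is running, it exposes an OpenAI-compatible HTTP API. A minimal request sketch, assuming the default bind of `localhost:8080` (the message content is just an example):

```bash
# Query the running llama-server instance via its OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Explain the photoelectric effect in one paragraph."}
        ],
        "temperature": 0.7
      }'
```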

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
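
Note that newer llama.cpp revisions have deprecated the Makefile in favor of CMake, so `make` may fail on a fresh clone. A sketch of the equivalent CMake build, assuming a checkout where CURL support is toggled with `LLAMA_CURL` and CUDA with `GGML_CUDA`:

```bash
# CMake build: enable CURL (needed for --hf-repo downloads) and, optionally, CUDA
cmake -B build -DLLAMA_CURL=ON -DGGML_CUDA=ON
cmake --build build --config Release
# The resulting binaries are placed under build/bin/
```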

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo NikolayKozloff/Einstein-v7-Qwen2-7B-Q8_0-GGUF --hf-file einstein-v7-qwen2-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo NikolayKozloff/Einstein-v7-Qwen2-7B-Q8_0-GGUF --hf-file einstein-v7-qwen2-7b-q8_0.gguf -c 2048
```
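
If you prefer to manage the download yourself instead of relying on `--hf-repo`, a sketch using `huggingface-cli` (shipped with the `huggingface_hub` package) together with llama.cpp's `-m` flag:

```bash
# Fetch the quantized file once, then point llama.cpp at the local copy
huggingface-cli download NikolayKozloff/Einstein-v7-Qwen2-7B-Q8_0-GGUF einstein-v7-qwen2-7b-q8_0.gguf --local-dir .
./llama-cli -m einstein-v7-qwen2-7b-q8_0.gguf -p "The meaning to life and the universe is"
```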