nold committed
Commit 87cedfb · verified · 1 Parent(s): dc8d27b

Upload folder using huggingface_hub (#1)


- 3ebbd6362cfc560bac28df982ee82705b9d8ab2233767a4b791128b3f71e896f (7be879cd70bf54a915b0458de69156a5d4c3a4db)
- 4313bb0e041f0cc380b7d777d672cebef35b86d441426f665a94d95ed7260d40 (eec3c7e092cd8695ba00c832439eb6bd6af6fa7a)
- f597cef01ca22b48f87733a2f5f3baa85a4225aa87406812d968e4fc7712a130 (e2d3d62a65e4157f758725340c73df12e680ff2d)
- 06b95d15cce13f992f59a0b314da0c08ec558b6b4b7f2cc4325a878072d5cc59 (ac6a00c0b3ea927d0331a83bed99a933141c7788)
- 8e41ef606c3da834b6724e26df8e26890666187a525caae13d6f6cabb6604254 (3b2469f787a816c52ffbefe818489aa038a13dce)

.gitattributes CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ BioMistral-7B_Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ BioMistral-7B_Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ BioMistral-7B_Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ BioMistral-7B_Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
BioMistral-7B_Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ddde6ed6c557fb8862b404613aa3ef3432de8ec8cbbb5d9f7c8f40f306011e39
+ size 4368439424
BioMistral-7B_Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5327e09bec34d293f6ba9aed08c5e0095b1f9ebc39ed9a52723e858a5ccb9aab
+ size 5131409536
BioMistral-7B_Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:805bbc0d7b5e5bcfb4ac28d5321a444a41ec23b9b75afb46f1d07da2a17a6410
+ size 5942065280
BioMistral-7B_Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8828211381cd5264404e24fe679c5df67ba2f792fff6b53a207e3d71e2e54237
+ size 7695857792
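
The four `.gguf` entries above are Git LFS pointers (`version`, `oid`, `size`); the weights themselves live in LFS storage. As a minimal sketch, one way to fetch a file programmatically is `hf_hub_download` from `huggingface_hub` (the `repo_id` below is a placeholder for this repository's actual id):

```python
# Sketch: fetch one quantized file from the Hub into the local cache.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="<this-repo-id>",              # placeholder: use this repository's id
    filename="BioMistral-7B_Q4_K_M.gguf",  # ~4.4 GB per the pointer's size field
)
print(local_path)
```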
README.md ADDED
@@ -0,0 +1,115 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - pubmed
+ language:
+ - fr
+ - en
+ - de
+ - nl
+ - es
+ - pt
+ - pl
+ - ro
+ - it
+ pipeline_tag: text-generation
+ tags:
+ - medical
+ - biology
+ ---
+
+
+ <p align="center">
+ <img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/>
+ </p>
+
+ # BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
+
+ **Abstract:**
+
+ Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges.
+ In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated this benchmark into 7 other languages and evaluated it. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.
+
+ **Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it has not been tailored to convey this knowledge effectively, safely, or suitably within professional parameters for action. We advise against using BioMistral in medical contexts unless it has been thoroughly aligned with specific use cases and further tested, notably including randomized controlled trials in real-world medical environments.
+
+ # 1. BioMistral models
+
+ **BioMistral** is a suite of open-source models based on Mistral, further pre-trained for the medical domain using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All models were trained on the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC.
+
+ | Model Name | Base Model | Model Type | Sequence Length | Download |
+ |:---:|:---:|:---:|:---:|:---:|
+ | BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
+ | BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
+ | BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
+ | BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |
+
+ # 2. Quantized Models
+
+ | Base Model | Method | q_group_size | w_bit | version | VRAM (GB) | Time | Download |
+ |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
+ | BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
+ | BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
+ | BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
+ | BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
+ | BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
+ | BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
+ | BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |
+
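+ The BnB rows correspond to on-the-fly quantization with bitsandbytes rather than pre-quantized checkpoints. As a minimal sketch (assuming `bitsandbytes` and `accelerate` are installed; the config values are illustrative defaults, not necessarily those used for the table):
+
+ ```python
+ # Sketch: load BioMistral-7B in 4-bit with bitsandbytes (the BnB.4 row above).
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,                     # 4-bit weights (~5 GB VRAM per the table)
+     bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
+ model = AutoModelForCausalLM.from_pretrained(
+     "BioMistral/BioMistral-7B",
+     quantization_config=bnb_config,
+     device_map="auto",
+ )
+ ```
+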
+ # 3. Using BioMistral
+
+ You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.
+
+ Loading the model and tokenizer:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
+ # Use the causal-LM class for text generation; plain AutoModel would return
+ # the bare transformer without a language-modeling head.
+ model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B")
+ ```
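+
+ A short generation example (a sketch; sampling parameters are illustrative):
+
+ ```python
+ # Tokenize a prompt, generate a continuation, and decode it.
+ inputs = tokenizer("What is a Large Language Model?", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```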
+
+ # 4. Supervised Fine-tuning Benchmark
+
+ | | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
+ |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
+ | **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
+ | | | | | | | | | | | | |
+ | **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 |
+ | **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** |
+ | **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
+ | **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> |
+ | | | | | | | | | | | | |
+ | **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
+ | **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
+ | **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
+ | **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
+ | | | | | | | | | | | | |
+ | **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |
+
+ Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds in the 3-shot setting. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, second-best underlined. *GPT-3.5 Turbo performance is reported from the 3-shot results without SFT.
+
+ # Citation (BibTeX)
+
+ arXiv: [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373)
+
+ ```bibtex
+ @misc{labrak2024biomistral,
+       title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains},
+       author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
+       year={2024},
+       eprint={2402.10373},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+ ```
+
+ **CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, understanding these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
+
+ ***
+
+ Quantization of [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B).
+ Created using the [llm-quantizer](https://github.com/Nold360/llm-quantizer) pipeline.
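+
+ Since this repository ships GGUF quantizations, the files can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using `llama-cpp-python` (an assumption: the package is installed via `pip install llama-cpp-python`; the file path and sampling parameters are illustrative):
+
+ ```python
+ # Sketch: run the Q4_K_M quantization locally; uses the chat template
+ # embedded in the GGUF metadata (tokenizer.chat_template).
+ from llama_cpp import Llama
+
+ llm = Llama(model_path="BioMistral-7B_Q4_K_M.gguf", n_ctx=2048)
+ out = llm.create_chat_completion(
+     messages=[{"role": "user", "content": "What is a Large Language Model?"}],
+     max_tokens=256,
+ )
+ print(out["choices"][0]["message"]["content"])
+ ```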
main.log ADDED
@@ -0,0 +1,119 @@
+ [1708357169] Log start
+ [1708357169] Cmd: /main -m BioMistral-7B_Q4_K_M.gguf -p "What is a Large Language Model?" -n 512 --temp 1
+ [1708357169] main: build = 0 (unknown)
+ [1708357169] main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
+ [1708357169] main: seed = 1708357169
+ [1708357169] main: llama backend init
+ [1708357169] main: load the model and apply lora adapter, if any
+ [1708357169] llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from BioMistral-7B_Q4_K_M.gguf (version GGUF V3 (latest))
+ [1708357169] llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
+ [1708357169] llama_model_loader: - kv 0: general.architecture str = llama
+ [1708357169] llama_model_loader: - kv 1: general.name str = workspace
+ [1708357169] llama_model_loader: - kv 2: llama.context_length u32 = 32768
+ [1708357169] llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
+ [1708357169] llama_model_loader: - kv 4: llama.block_count u32 = 32
+ [1708357169] llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
+ [1708357169] llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
+ [1708357169] llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
+ [1708357169] llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
+ [1708357169] llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
+ [1708357169] llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
+ [1708357169] llama_model_loader: - kv 11: general.file_type u32 = 15
+ [1708357169] llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
+ [1708357169] llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
+ [1708357169] llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [-1000.000000, -1000.000000, -1000.00...
+ [1708357169] llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [3, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
+ [1708357169] llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
+ [1708357169] llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
+ [1708357169] llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
+ [1708357169] llama_model_loader: - kv 19: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
+ [1708357169] llama_model_loader: - kv 20: general.quantization_version u32 = 2
+ [1708357169] llama_model_loader: - type f32: 65 tensors
+ [1708357169] llama_model_loader: - type q4_K: 193 tensors
+ [1708357169] llama_model_loader: - type q6_K: 33 tensors
+ [1708357169] llm_load_vocab: special tokens definition check successful ( 259/32000 ).
+ [1708357169] llm_load_print_meta: format = GGUF V3 (latest)
+ [1708357169] llm_load_print_meta: arch = llama
+ [1708357169] llm_load_print_meta: vocab type = SPM
+ [1708357169] llm_load_print_meta: n_vocab = 32000
+ [1708357169] llm_load_print_meta: n_merges = 0
+ [1708357169] llm_load_print_meta: n_ctx_train = 32768
+ [1708357169] llm_load_print_meta: n_embd = 4096
+ [1708357169] llm_load_print_meta: n_head = 32
+ [1708357169] llm_load_print_meta: n_head_kv = 8
+ [1708357169] llm_load_print_meta: n_layer = 32
+ [1708357169] llm_load_print_meta: n_rot = 128
+ [1708357169] llm_load_print_meta: n_embd_head_k = 128
+ [1708357169] llm_load_print_meta: n_embd_head_v = 128
+ [1708357169] llm_load_print_meta: n_gqa = 4
+ [1708357169] llm_load_print_meta: n_embd_k_gqa = 1024
+ [1708357169] llm_load_print_meta: n_embd_v_gqa = 1024
+ [1708357169] llm_load_print_meta: f_norm_eps = 0.0e+00
+ [1708357169] llm_load_print_meta: f_norm_rms_eps = 1.0e-05
+ [1708357169] llm_load_print_meta: f_clamp_kqv = 0.0e+00
+ [1708357169] llm_load_print_meta: f_max_alibi_bias = 0.0e+00
+ [1708357169] llm_load_print_meta: n_ff = 14336
+ [1708357169] llm_load_print_meta: n_expert = 0
+ [1708357169] llm_load_print_meta: n_expert_used = 0
+ [1708357169] llm_load_print_meta: rope scaling = linear
+ [1708357169] llm_load_print_meta: freq_base_train = 10000.0
+ [1708357169] llm_load_print_meta: freq_scale_train = 1
+ [1708357169] llm_load_print_meta: n_yarn_orig_ctx = 32768
+ [1708357169] llm_load_print_meta: rope_finetuned = unknown
+ [1708357169] llm_load_print_meta: model type = 7B
+ [1708357169] llm_load_print_meta: model ftype = Q4_K - Medium
+ [1708357169] llm_load_print_meta: model params = 7.24 B
+ [1708357169] llm_load_print_meta: model size = 4.07 GiB (4.83 BPW)
+ [1708357169] llm_load_print_meta: general.name = workspace
+ [1708357169] llm_load_print_meta: BOS token = 1 '<s>'
+ [1708357169] llm_load_print_meta: EOS token = 2 '</s>'
+ [1708357169] llm_load_print_meta: UNK token = 0 '<unk>'
+ [1708357169] llm_load_print_meta: LF token = 13 '<0x0A>'
+ [1708357169] llm_load_tensors: ggml ctx size = 0.11 MiB
+ [1708357178] llm_load_tensors: CPU buffer size = 4165.37 MiB
+ [1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178] .[1708357178]
+ [1708357178] llama_new_context_with_model: n_ctx = 512
+ [1708357178] llama_new_context_with_model: freq_base = 10000.0
+ [1708357178] llama_new_context_with_model: freq_scale = 1
+ [1708357178] llama_kv_cache_init: CPU KV buffer size = 64.00 MiB
+ [1708357178] llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
+ [1708357178] llama_new_context_with_model: CPU input buffer size = 10.01 MiB
+ [1708357178] llama_new_context_with_model: CPU compute buffer size = 72.00 MiB
+ [1708357178] llama_new_context_with_model: graph splits (measure): 1
+ [1708357178] warming up the model with an empty run
+ [1708357178] n_ctx: 512
+ [1708357178]
+ [1708357178] system_info: n_threads = 16 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
+ [1708357178] add_bos: 1
+ [1708357178] tokenize the prompt
+ [1708357178] prompt: "What is a Large Language Model?"
+ [1708357178] tokens: [ '':1, ' What':1824, ' is':349, ' a':264, ' Large':23292, ' Lang':13550, 'ua':3772, 'ge':490, ' Model':8871, '?':28804 ]
+ [1708357178] recalculate the cached logits (check): embd_inp.empty() false, n_matching_session_tokens 0, embd_inp.size() 10, session_tokens.size() 0, embd_inp.size() 10
+ [1708357178] inp_pfx: [ '':1, ' ':28705, '':13, '':13, '###':27332, ' Inst':3133, 'ruction':3112, ':':28747, '':13, '':13 ]
+ [1708357178] inp_sfx: [ ' ':28705, '':13, '':13, '###':27332, ' Response':12107, ':':28747, '':13, '':13 ]
+ [1708357178] cml_pfx: [ '':1, ' ':28705, '':13, '<':28789, '|':28766, 'im':321, '_':28730, 'start':2521, '|':28766, '>':28767, 'user':1838, '':13 ]
+ [1708357178] cml_sfx: [ ' <':523, '|':28766, 'im':321, '_':28730, 'end':416, '|':28766, '>':28767, '':13, '<':28789, '|':28766, 'im':321, '_':28730, 'start':2521, '|':28766, '>':28767, 'ass':489, 'istant':11143, '':13 ]
+ [1708357178] sampling:
+ repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
+ top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 1.000
+ mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
+ [1708357178] sampling order:
+ CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
+ [1708357178] generate: n_ctx = 512, n_batch = 512, n_predict = 512, n_keep = 0
+ [1708357178]
+
+ [1708357178] embd_inp.size(): 10, n_consumed: 0
+ [1708357178] eval: [ '':1, ' What':1824, ' is':349, ' a':264, ' Large':23292, ' Lang':13550, 'ua':3772, 'ge':490, ' Model':8871, '?':28804 ]
+ [1708357178] n_past = 10
+ [1708357178] sampled token: 2: ''
+ [1708357178] last: [ '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':0, '':1, ' What':1824, ' is':349, ' a':264, ' Large':23292, ' Lang':13550, 'ua':3772, 'ge':490, ' Model':8871, '?':28804, '':2 ]
+ [1708357178] n_remain: 511
+ [1708357178] found EOS token
+ [1708357178] [end of text]
+ [1708357178]
+ [1708357178] llama_print_timings: load time = 8832.76 ms
+ [1708357178] llama_print_timings: sample time = 0.36 ms / 1 runs ( 0.36 ms per token, 2808.99 tokens per second)
+ [1708357178] llama_print_timings: prompt eval time = 369.59 ms / 10 tokens ( 36.96 ms per token, 27.06 tokens per second)
+ [1708357178] llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
+ [1708357178] llama_print_timings: total time = 370.34 ms / 11 tokens
+ [1708357178] Log end