|
--- |
|
license: apache-2.0 |
|
language: |
|
- fr |
|
pipeline_tag: text-generation |
|
library_name: transformers |
|
tags: |
|
- LLM |
|
inference: false |
|
--- |
|
|
|
|
## I am still refining the structure of these descriptions.

Over time they will contain more content to help you find the best model for your purpose.
|
|
|
# vigogne-falcon-7b-chat - GGUF |
|
- Model creator: [bofenghuang](https://huggingface.co/bofenghuang) |
|
- Original model: [vigogne-falcon-7b-chat](https://huggingface.co/bofenghuang/vigogne-falcon-7b-chat) |
|
|
|
Vigogne-Falcon-7B-Chat is a Falcon-7B model fine-tuned to conduct multi-turn dialogues in French between a human user and an AI assistant.
|
|
|
|
|
|
|
# About GGUF format |
|
|
|
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.

A growing list of software supports it and can therefore use this model.

The core project building on the ggml library is [llama.cpp](https://github.com/ggerganov/llama.cpp) by Georgi Gerganov.
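Every GGUF file begins with a fixed header: the 4-byte magic `GGUF` followed by a little-endian `uint32` format version. As a minimal sketch (the function name is my own, not part of any library), the header can be validated like this:

```python
import struct

def read_gguf_header(data: bytes) -> int:
    """Parse the start of a GGUF file: 4-byte magic, then a uint32 version.

    Raises ValueError if the magic bytes do not match b"GGUF".
    """
    magic, version = struct.unpack_from("<4sI", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return version
```

This is a quick way to confirm that a downloaded file is really in GGUF format before handing it to a loader.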
|
|
|
# Quantization variants |
|
|
|
A number of quantized files are available. Here is how to choose the one that is best for you:
|
|
|
# Legacy quants
|
|
|
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.

Nevertheless, they are fully supported, as several circumstances cause certain models not to be compatible with the modern K-quants.

Falcon 7B models cannot be quantized to K-quants.
|
|
|
# K-quants |
|
|
|
K-quants are based on the idea that the quantization of certain parts affects the quality in different ways. If you quantize certain parts more and others less, you get a more powerful model with the same file size, or a smaller file size and lower memory load with comparable performance. |
|
So, if possible, use K-quants. |
|
With Q6_K you will find it hard to detect any quality difference from the original model; asking the model the same question twice may produce larger differences than the quantization does.
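The trade-off above comes down to bits per weight: fewer bits mean a smaller file and lower memory load. As a rough sketch, the download size of a quantized 7B model can be estimated from approximate, assumed bits-per-weight figures (the exact values vary with tensor layout and metadata, so treat these numbers as illustrative only):

```python
# Approximate bits per weight for some common ggml quantization types.
# These are assumed ballpark figures, not exact on-disk sizes.
BITS_PER_WEIGHT = {
    "Q4_0": 4.5,
    "Q5_0": 5.5,
    "Q8_0": 8.5,
    "Q6_K": 6.6,
}

def estimated_size_gib(n_params: float, quant: str) -> float:
    """Estimate the quantized file size in GiB from the parameter count."""
    bits = n_params * BITS_PER_WEIGHT[quant]
    return bits / 8 / 2**30
```

For a 7B-parameter model, this puts Q4_0 at roughly 3.7 GiB and Q8_0 at roughly 6.9 GiB, which matches the intuition that lower-bit quants halve the memory footprint at some cost in quality.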
|
|
|
|
|
|
|
# Original Model Card: |
|
<p align="center" width="100%"> |
|
<img src="https://huggingface.co/bofenghuang/vigogne-falcon-7b-chat/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;"> |
|
</p> |
|
|
|
# Vigogne-Falcon-7B-Chat: A French Chat Falcon Model |
|
|
|
Vigogne-Falcon-7B-Chat is a [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) model fine-tuned to conduct multi-turn dialogues in French between a human user and an AI assistant.
|
|
|
For more information, please visit the GitHub repository: https://github.com/bofenghuang/vigogne
|
|
|
## Changelog |
|
|
|
All versions are available in branches. |
|
|
|
- **V1.0**: Initial release. |
|
- **V2.0**: Expanded training dataset to 419k for better performance. |
|
|
|
## Usage |
|
|
|
```python |
|
import torch |
|
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig |
|
from vigogne.preprocess import generate_inference_chat_prompt |
|
|
|
model_name_or_path = "bofenghuang/vigogne-falcon-7b-chat" |
|
|
|
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False) |
|
tokenizer.pad_token = tokenizer.eos_token |
|
|
|
model = AutoModelForCausalLM.from_pretrained( |
|
model_name_or_path, |
|
torch_dtype=torch.float16, |
|
device_map="auto", |
|
trust_remote_code=True, |
|
) |
|
|
|
user_query = "Expliquez la différence entre DoS et phishing." |
|
prompt = generate_inference_chat_prompt([[user_query, ""]], tokenizer=tokenizer) |
|
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device) |
|
input_length = input_ids.shape[1] |
|
|
|
generated_outputs = model.generate( |
|
input_ids=input_ids, |
|
generation_config=GenerationConfig( |
|
temperature=0.1, |
|
do_sample=True, |
|
repetition_penalty=1.0, |
|
max_new_tokens=512, |
|
), |
|
return_dict_in_generate=True, |
|
pad_token_id=tokenizer.eos_token_id, |
|
eos_token_id=tokenizer.eos_token_id, |
|
) |
|
generated_tokens = generated_outputs.sequences[0, input_length:] |
|
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True) |
|
print(generated_text) |
|
``` |
|
|
|
<!-- You can infer this model by using the following Google Colab Notebook. |
|
|
|
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> --> |
|
|
|
## Limitations |
|
|
|
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.