|
--- |
|
license: apache-2.0 |
|
language: |
|
- ca |
|
- es |
|
- en |
|
tags: |
|
- RAG |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
# FLOR-6.3B Model optimized for Retrieval-Augmented Generation
|
|
|
|
|
## Table of Contents |
|
<details> |
|
<summary>Click to expand</summary> |
|
|
|
- [Model description](#model-description) |
|
- [Intended uses and limitations](#intended-uses-and-limitations) |
|
- [How to use](#how-to-use) |
|
- [Limitations and bias](#limitations-and-bias) |
|
- [Training](#training) |
|
|
- [Additional information](#additional-information) |
|
|
|
</details> |
|
|
|
## Model description |
|
|
|
**FlorRAG** is a 6.3B-parameter, transformer-based causal language model for Catalan, Spanish, and English. It was trained on a customized QA dataset drawn from various sources, specifically for use in Retrieval-Augmented Generation (RAG) applications.

The dataset used to fine-tune the model is [projecte-aina/RAG_Multilingual](https://huggingface.co/datasets/projecte-aina/RAG_Multilingual).
|
## Intended uses and limitations |
|
|
|
The **FlorRAG** model is ready to use in RAG applications for Catalan, Spanish, and English.

Given an instruction and a retrieved context, it generates an answer grounded in that context, following the prompt format shown below.
|
|
|
## How to use |
|
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="projecte-aina/FlorRAG")

instruction = "Quants habitants té Mataró?"

context = (
    "Mataró és una ciutat de Catalunya, capital de la comarca del Maresme. "
    "Situada al litoral mediterrani, a uns 30 km al nord-est de Barcelona, "
    "ha estat tradicionalment un centre administratiu de rellevància territorial "
    "i un pol de dinamisme econòmic. Compta amb prop de 130.000 habitants, "
    "essent actualment la vuitena població del Principat i la tretzena dels Països Catalans."
)

# The prompt must follow the "### Instruction / ### Context / ### Answer"
# layout the model was fine-tuned on.
def givePrediction(instruction, context, max_new_tokens=50, repetition_penalty=1.2,
                   top_k=50, top_p=0.95, do_sample=True, temperature=0.5):
    prompt = f"### Instruction\n{instruction}\n### Context\n{context}\n### Answer\n"
    response = pipe(
        prompt,
        temperature=temperature,
        repetition_penalty=repetition_penalty,
        max_new_tokens=max_new_tokens,
        top_k=top_k,
        top_p=top_p,
        do_sample=do_sample,
    )[0]["generated_text"]
    # The pipeline returns the prompt followed by the generation,
    # so keep only the text after the final "### Answer" marker.
    return response.split("### Answer\n")[-1].strip()

answer = givePrediction(instruction, context)
print(answer)
# 'Mataró té una població de 130.000 habitants. Aquesta ciutat catalana, situada al
# litoral mediterrani, a uns 30 km al nord-est de Barcelona, ha estat històricament
# un important centre administratiu i un important pol de dinamisme econòmic.'
```
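
To show how the model might slot into an end-to-end RAG application, here is a minimal, hypothetical sketch that reuses the `givePrediction` helper defined above. The `retrieve` function is a toy stand-in for whatever retriever your application uses (BM25, dense embeddings, a vector store); it is not part of this repository, and only the prompt format comes from this model card.

```python
# Hypothetical sketch of a minimal RAG loop around FlorRAG.
# `retrieve` is a toy placeholder, not part of this repository.

def retrieve(query, documents, k=1):
    # Rank documents by simple word overlap with the query.
    query_terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

documents = [
    context,  # the Mataró passage defined above
    "El Maresme és una comarca de Catalunya situada a la costa mediterrània, al nord-est de Barcelona.",
]

def answer_with_rag(question, documents):
    # Retrieve the most relevant passage(s) and pass them to the model as the
    # context of the "### Instruction / ### Context / ### Answer" prompt.
    retrieved = " ".join(retrieve(question, documents, k=1))
    return givePrediction(question, retrieved)

print(answer_with_rag("Quants habitants té Mataró?", documents))
```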
|
|
|
## Limitations and bias |
|
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. |
|
However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques |
|
on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. |
|
|
|
|
|
## Training |
|
|
|
|
|
### Instruction Data |
|
|
|
The training corpus comprises 56,406 RAG instruction-following examples. They were created from extractive QA instances that served as 'kernels of truth' to prompt a Mixture-of-Experts model to generate more human-like, complete answers grounded in a given context. See the data card at [projecte-aina/RAG_Multilingual](https://huggingface.co/datasets/projecte-aina/RAG_Multilingual).
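
For reference, the instruction data can be inspected directly with the `datasets` library. The snippet below is a minimal sketch: it assumes the dataset's default configuration loads without an explicit config name and makes no assumptions about split or column names.

```python
from datasets import load_dataset

# Load the RAG instruction dataset referenced above.
# Assumption: the default configuration loads without an explicit config name.
ds = load_dataset("projecte-aina/RAG_Multilingual")

# Show the available splits and one raw example,
# without assuming particular column names.
print(ds)
first_split = next(iter(ds))
print(ds[first_split][0])
```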
|
|
|
## Additional information |
|
|
|
### Author |
|
The Language Technologies Unit at the Barcelona Supercomputing Center.
|
|
|
### Contact |
|
For further information, please send an email to <[email protected]>. |
|
|
|
### Copyright |
|
Copyright(c) 2024 by Language Technologies Unit, Barcelona Supercomputing Center. |
|
|
|
### License |
|
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
|
|
|
### Funding |
|
This work has been promoted and financed by the Government of Catalonia through the [Aina project](https://projecteaina.cat/).
|
|
|
### Disclaimer |
|
|
|
<details> |
|
<summary>Click to expand</summary> |
|
|
|
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0. |
|
|
|
Be aware that the model may have biases and/or any other undesirable distortions. |
|
|
|
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it) |
|
or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and, |
|
in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. |
|
|
|
In no event shall the owner and creator of the model (Barcelona Supercomputing Center) |
|
be liable for any results arising from the use made by third parties. |
|
|
|
</details> |
|
|