---
library_name: transformers
tags: []
---
# koalpaca-polyglot-PEFT-ex

<!-- Provide a quick summary of what the model is/does. -->
This model is the result of a PEFT (parameter-efficient fine-tuning) study.

The study was conducted to learn how to fine-tune an open-source LLM with QLoRA.

This model is not built for general-purpose performance, so it is not recommended for use in practice.
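For context, a minimal sketch of the QLoRA setup explored in the study is shown below: the base model is loaded in 4-bit with bitsandbytes before LoRA adapters are attached. The base model id is taken from the Model Description below; the quantization settings are typical QLoRA defaults and are assumptions, not the confirmed training configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# QLoRA step 1: load the frozen base model in 4-bit (NF4) precision.
# Base model id from the Model Description below; quantization settings are
# typical QLoRA defaults, not confirmed values for this particular run.
base_id = "beomi/polyglot-ko-12.8b-safetensors"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Keeping the frozen base weights in 4-bit is what makes fine-tuning a 12.8B-parameter model feasible on a single GPU.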
## Model Details

This model is trained to answer questions about a specific library (the Kyungpook National University Library).

The model is fine-tuned from "KoAlpaca".

Only 0.099% of the parameters were trained, using the QLoRA method.

Even though only a small fraction of the parameters were trained, the model gives accurate answers about the data it was trained on.
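The trainable-parameter fraction quoted above is the kind of figure reported by PEFT's `print_trainable_parameters()` once LoRA adapters are attached to the quantized base model. The sketch below shows one plausible adapter configuration; the actual rank, alpha, dropout, and target modules used for this model are not stated in this card and are assumptions.

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA step 2: attach LoRA adapters to the 4-bit base model loaded above,
# then report how many parameters are actually trainable.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,                                 # assumed LoRA rank
    lora_alpha=32,                       # assumed scaling factor
    lora_dropout=0.05,                   # assumed dropout
    target_modules=["query_key_value"],  # fused attention projection in GPT-NeoX / Polyglot
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # prints trainable vs. total parameters and the trainable %
```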
## Model Description

<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Chae Minsu](https://github.com/chaeminsoo)
- **Model type:** Text Generation
- **Language(s) (NLP):** Korean
- **Finetuned from model:** [beomi/polyglot-ko-12.8b-safetensors](https://huggingface.co/beomi/polyglot-ko-12.8b-safetensors)
- **Training Data:** A snippet of the Kyungpook National University Library FAQ
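A minimal inference sketch is shown below, assuming the fine-tuned weights are published as a PEFT adapter on top of the base model listed above; the adapter repo id and the KoAlpaca-style prompt format are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "beomi/polyglot-ko-12.8b-safetensors"
adapter_id = "<your-namespace>/koalpaca-polyglot-PEFT-ex"  # hypothetical adapter repo id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Assumed KoAlpaca-style prompt format; the question asks about the library loan period.
prompt = "### 질문: 도서관 대출 기간은 어떻게 되나요?\n\n### 답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```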