---
library_name: transformers
tags: []
---
# koalpaca-polyglot-PEFT-ex
<!-- Provide a quick summary of what the model is/does. -->
This model is the result of a PEFT study conducted to understand how to fine-tune an open-source LLM using QLoRA.
It is not built for general-purpose performance, so it is not recommended for practical use.
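
For context, here is a minimal sketch of the 4-bit loading step that QLoRA relies on. The hyperparameters are illustrative assumptions, not the exact configuration used for this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model = "beomi/polyglot-ko-12.8b-safetensors"

# QLoRA loads the frozen base model in 4-bit NF4 precision.
# These settings are assumptions, not the values used for this model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
```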
## Model Details
This model is trained to answer questions about a specific library.
It is fine-tuned from "KoAlpaca" using the QLoRA method, updating only 0.099% of the parameters.
Even though only a small fraction of the parameters was trained, the model gives real answers about the data it was trained on.
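
Continuing the sketch above, attaching a LoRA adapter with PEFT is what keeps the trainable fraction this small. The rank and target modules below are assumptions (polyglot-ko is a GPT-NeoX-style model, whose fused attention projection is named `query_key_value`), not the exact configuration of this model:

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Prepare the 4-bit quantized model for training, then attach a LoRA adapter.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,                                 # illustrative rank, not the value used here
    lora_alpha=32,
    target_modules=["query_key_value"],  # GPT-NeoX-style attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # e.g. "trainable params: ... || trainable%: 0.099"
```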
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Chae Minsu](https://github.com/chaeminsoo)
- **Model type:** Text Generation
- **Language(s) (NLP):** Korean
- **Finetuned from model:** [beomi/polyglot-ko-12.8b-safetensors](https://huggingface.co/beomi/polyglot-ko-12.8b-safetensors)
- **Training Data:** Snippet of Kyungpook National University Library FAQ
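
A hypothetical inference sketch follows; the adapter repo id is a placeholder, since this card does not state where the adapter is published:

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Placeholder adapter repo id -- replace with the actual published location.
adapter_id = "your-username/koalpaca-polyglot-PEFT-ex"

# Loads the base model and applies the LoRA adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, device_map="auto", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("beomi/polyglot-ko-12.8b-safetensors")

# KoAlpaca-style prompt (assumed format): "What is the library loan period?"
prompt = "### 질문: 도서관 대출 기간은 어떻게 되나요?\n\n### 답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```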