---
license: llama3
language:
- en
- ja
- zh
tags:
- roleplay
- llama3
- sillytavern
- idol
---
# Special Thanks:
- Thanks to Lewdiculous for the superb GGUF version and the conscientious, responsible dedication.
- https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.2-GGUF-IQ-Imatrix-Request
# Model Description:
The module combination has been readjusted to better fulfill various roles, and the model has been adapted for mobile phones.
- DarkIdol: roles you can imagine, and roles you cannot.
- Roleplay
  - Specialized in various role-playing scenarios
- For more, see the test roles (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/resolve/main/test)
- For more, see the LM Studio presets (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/resolve/main/config-presets)
![image/png](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/resolve/main/llama3-8B-DarkIdol-1.2.png)
# Changelog
### 2024-06-24
- Chinese, Japanese, and Korean have been readjusted so the model outputs these languages better.
- Issue: image-recognition accuracy has decreased. Workaround: use https://huggingface.co/spaces/aifeifei798/Florence-2-base to generate image descriptions. Thanks to microsoft/Florence-2 for fast, accurate image recognition with multiple output formats; https://huggingface.co/spaces/gokaygokay/Florence-2 is the fast version, thanks to gokaygokay for the application.
### 2024-06-20
- Uses Meta-Llama-3-8B-Instruct as the underlying model.
- Integrates the numerous models I previously created; see base_model.
# Stop Strings
```python
stop = [
    "## Instruction:",
    "### Instruction:",
    "<|end_of_text|>",
    " //:",
    "</s>",
    "<3```",
    "### Note:",
    "### Input:",
    "### Response:",
    "### Emoticons:"
]
```
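Frontends apply these stop strings by cutting generation off at the earliest match. A minimal sketch of that trimming logic (plain Python, no inference backend assumed; `trim_at_stop` is an illustrative helper, not part of any listed tool):

```python
# Stop strings for llama3-8B-DarkIdol-1.2, copied from the list above.
STOP = [
    "## Instruction:", "### Instruction:", "<|end_of_text|>",
    " //:", "</s>", "<3```", "### Note:", "### Input:",
    "### Response:", "### Emoticons:",
]

def trim_at_stop(text: str, stops=STOP) -> str:
    """Truncate generated text at the earliest stop string, if any."""
    cut = len(text)
    for s in stops:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]
```

Most backends (llama.cpp, KoboldCpp, LM Studio) accept the same list directly in their stop/stopping-strings setting, so this helper is only needed if you stream raw tokens yourself.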
# Model Use
- Koboldcpp https://github.com/LostRuins/koboldcpp
- Since KoboldCpp can lag behind the latest llama.cpp commits, I recommend this [fork](https://github.com/Nexesenex/kobold.cpp) if anyone has issues.
- LM Studio https://lmstudio.ai/
- llama.cpp https://github.com/ggerganov/llama.cpp
- Backyard AI https://backyard.ai/
- Meet Layla, an AI chatbot that runs offline on your device. No internet connection required, no censorship, complete privacy. Layla Lite: https://www.layla-network.ai/
- Layla Lite GGUF: llama3-8B-DarkIdol-1.2-Q4_K_S-imat.gguf https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.2-GGUF-IQ-Imatrix-Request/blob/main/llama3-8B-DarkIdol-1.2-Q4_K_S-imat.gguf?download=true
- More GGUF quantizations: https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.2-GGUF-IQ-Imatrix-Request
# Character Cards
- https://character-tavern.com/
- https://characterhub.org/
- https://pygmalion.chat/
- https://aetherroom.club/
- https://backyard.ai/
- Layla AI chatbot
### If you want to use vision functionality:
* You must use the latest version of [Koboldcpp](https://github.com/Nexesenex/kobold.cpp).
### To use the multimodal **vision** capabilities of this model, you need to load the specified **mmproj** file, which can be found inside this model repo: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16)
* You can load the **mmproj** file by using the corresponding section in the interface:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
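For llama.cpp users, the same model/mmproj pairing can be done on the command line via its llava example. A hedged sketch (file names are placeholders for your local downloads, not exact paths):

```shell
# Illustrative invocation of llama.cpp's llava example with a GGUF model
# plus its mmproj file; substitute your own local file paths and prompt.
./llava-cli \
  -m llama3-8B-DarkIdol-1.2-Q4_K_S-imat.gguf \
  --mmproj llama-3-mmproj-model-f16.gguf \
  --image photo.png \
  -p "Describe this image."
```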
### Thank you:
To the authors for their hard work, which has given me more options to easily create what I want. Thank you for your efforts.
- Hastagaras
- Gryphe
- cgato
- ChaoticNeutrals
- mergekit
- merge
- transformers
- llama
- Nitral-AI
- MLP-KTLim
- rinna
- hfl
- .........
---
base_model:
- aifeifei798/Meta-Llama-3-8B-Instruct
- MLP-KTLim/llama-3-Korean-Bllossom-8B
- aifeifei798/llama3-8B-DarkIdol-1.1
- rinna/llama-3-youko-8b
- hfl/llama-3-chinese-8b-instruct-v3
library_name: transformers
tags:
- mergekit
- merge
---
# llama3-8B-DarkIdol-1.2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [aifeifei798/Meta-Llama-3-8B-Instruct](https://huggingface.co/aifeifei798/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B)
* [aifeifei798/llama3-8B-DarkIdol-1.1](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.1)
* [rinna/llama-3-youko-8b](https://huggingface.co/rinna/llama-3-youko-8b)
* [hfl/llama-3-chinese-8b-instruct-v3](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: hfl/llama-3-chinese-8b-instruct-v3
- model: rinna/llama-3-youko-8b
- model: MLP-KTLim/llama-3-Korean-Bllossom-8B
- model: aifeifei798/llama3-8B-DarkIdol-1.1
merge_method: model_stock
base_model: aifeifei798/Meta-Llama-3-8B-Instruct
dtype: bfloat16
```
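For intuition, the Model Stock method interpolates each tensor between the average of the fine-tuned weights and the base weights, with a ratio derived from the angle between the fine-tuned deltas. A simplified numpy sketch of that idea (not mergekit's actual implementation; `model_stock_merge` and its per-tensor framing are illustrative assumptions):

```python
import numpy as np

def model_stock_merge(base: np.ndarray, finetuned: list[np.ndarray]) -> np.ndarray:
    """Simplified Model Stock merge for a single weight tensor.

    Averages the k fine-tuned weights, then interpolates toward the base
    with t = k*cos(theta) / (1 + (k-1)*cos(theta)), where cos(theta) is
    the mean pairwise cosine similarity of the fine-tuned deltas.
    """
    k = len(finetuned)
    deltas = [w - base for w in finetuned]
    sims = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(sims)) if sims else 1.0
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

When the fine-tuned models agree (cos(theta) near 1), t approaches 1 and the merge is close to a plain average; when they disagree, the result is pulled back toward the base model. In practice the merge is run with `mergekit` using the YAML configuration above, not hand-rolled code.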