|
--- |
|
license: other |
|
base_model: meta-llama/Meta-Llama-3-8B |
|
library_name: transformers |
|
tags: |
|
- 4-bit |
|
- AWQ |
|
- text-generation |
|
- autotrain_compatible |
|
- endpoints_compatible |
|
- generated_from_trainer |
|
pipeline_tag: text-generation |
|
inference: false |
|
quantized_by: Suparious |
|
datasets: |
|
- cognitivecomputations/Dolphin-2.9 |
|
- teknium/OpenHermes-2.5 |
|
- m-a-p/CodeFeedback-Filtered-Instruction |
|
- cognitivecomputations/dolphin-coder |
|
- cognitivecomputations/samantha-data |
|
- HuggingFaceH4/ultrachat_200k |
|
- microsoft/orca-math-word-problems-200k |
|
- abacusai/SystemChat-1.1 |
|
- Locutusque/function-calling-chatml |
|
- internlm/Agent-FLAN |
|
--- |
|
# cognitivecomputations/dolphin-2.9-llama3-8b AWQ |
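The "4-bit" and "AWQ" tags mean the weights are stored at roughly half a byte per parameter instead of the two bytes per parameter of fp16. As a rough back-of-the-envelope sketch (illustrative figures only; it ignores activation memory, the KV cache, and quantization metadata), the weight footprint works out to:

```python
# Approximate weight-only memory footprint of an 8B-parameter model.
params = 8e9

fp16_gb = params * 2 / 1e9   # 2 bytes per parameter in fp16
awq_gb = params * 0.5 / 1e9  # 0.5 bytes per parameter at 4-bit

print(f"fp16: ~{fp16_gb:.0f} GB, AWQ 4-bit: ~{awq_gb:.0f} GB")
# → fp16: ~16 GB, AWQ 4-bit: ~4 GB
```

This is why the AWQ build fits comfortably on a single consumer GPU where the fp16 checkpoint would not.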
|
|
|
Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations
|
|
|
Discord: https://discord.gg/8fbBeC7ZGx |
|
|
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> |
|
|
|
My appreciation for the sponsors of Dolphin 2.9: |
|
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 10x L40S node
|
|
|
This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE) |
|
|
|
The base model has an 8k context window, and the full-weight fine-tuning was done with a 4k sequence length.
|
|
|
Training took 2.5 days on 8x L40S GPUs provided by Crusoe Cloud.
|
|
|
This model was trained with full fine-tuning (FFT) on all parameters, using the ChatML prompt template format.
|
|
|
Example:
|
|
|
``` |
|
<|im_start|>system |
|
You are Dolphin, a helpful AI assistant.<|im_end|> |
|
<|im_start|>user |
|
{prompt}<|im_end|> |
|
<|im_start|>assistant |
|
|
|
``` |
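The template above can be assembled with plain string formatting. Here is a minimal sketch; `to_chatml` is a hypothetical helper name, and in practice the `transformers` tokenizer's `apply_chat_template` method produces the same layout from a list of messages:

```python
def to_chatml(system: str, prompt: str) -> str:
    """Format a system message and a user prompt in ChatML,
    leaving the assistant turn open for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

text = to_chatml("You are Dolphin, a helpful AI assistant.", "Hello!")
print(text)
```

The string returned by `to_chatml` ends at the open `<|im_start|>assistant` turn, so generation should stop when the model emits `<|im_end|>`.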
|
|