# Original Model Card

## Uploaded model
- Developed by: EpistemeAI
- License: llama3.1
- Finetuned from model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
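For context, here is a minimal sketch of what an Unsloth + TRL supervised fine-tuning setup typically looks like. The dataset, LoRA settings, and hyperparameters below are illustrative assumptions, not this model's actual training configuration, and the `SFTTrainer` keyword style matches older TRL releases (newer ones move these options into `SFTConfig`):

```python
# Sketch of an Unsloth + TRL SFT run. All data and hyperparameters here
# are placeholders, not the configuration used to train this model.
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B-bnb-4bit",
    max_seq_length = 8192,
    load_in_4bit = True,
)
model = FastLanguageModel.get_peft_model(  # attach LoRA adapters for memory-efficient training
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

dataset = Dataset.from_dict({"text": [  # placeholder corpus; substitute real training data
    "Example training document one.",
    "Example training document two.",
]})

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 8192,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        max_steps = 60,
        learning_rate = 2e-4,
        output_dir = "outputs",
    ),
)
trainer.train()
```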
To run the model for inference, load it with Unsloth's FastLanguageModel:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "EpistemeAI/TuneLlama-3.1-8B-GGUF",
    max_seq_length = 8192,
    load_in_4bit = True,
    # token = "hf-xxxx",  # use one if using gated models like meta-llama/Llama-2-7b-hf
)
```
Apply the Llama-3 chat template to the tokenizer; the mapping translates ShareGPT-style message keys ("from"/"value", "human"/"gpt") into the template's "role"/"content" fields:

```python
from unsloth.chat_templates import get_chat_template

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3",
    mapping = {"role": "from", "content": "value", "user": "human", "assistant": "gpt"},  # ShareGPT style
)
```
Then enable inference mode and generate with streamed output:

```python
from transformers import TextStreamer

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

messages = [
    # EDIT HERE!
    {"from": "human", "value": "Continue the Fibonacci sequence: 1, 1, 2, 3, 5, 8,"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)  # prints tokens as they are generated
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 1024, use_cache = True)
```
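Since the repository name indicates GGUF weights, the model can also be run without the Unsloth stack via llama.cpp bindings. A minimal sketch using llama-cpp-python; the local file name is a hypothetical quantization, so substitute whichever GGUF file the repository actually ships:

```python
from llama_cpp import Llama

# Hypothetical local GGUF file; replace with the actual quantization
# downloaded from the repository.
llm = Llama(model_path = "TuneLlama-3.1-8B.Q4_K_M.gguf", n_ctx = 8192)

out = llm.create_chat_completion(
    messages = [{"role": "user", "content": "Continue the Fibonacci sequence: 1, 1, 2, 3, 5, 8,"}],
    max_tokens = 128,
)
print(out["choices"][0]["message"]["content"])
```

Running on CPU this way trades speed for portability; the 4-bit GPU path above is faster when CUDA is available.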
## Model tree for EpistemeAI/TuneLlama-3.1-8B-GGUF

- Base model: meta-llama/Llama-3.1-8B
- Quantized: unsloth/Meta-Llama-3.1-8B-bnb-4bit