How to use

from transformers import AutoModelForCausalLM, AutoTokenizer, TextGenerationPipeline

model_path = 'fiveflow/KoLlama-3-8B-Instruct'
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",        # spread weights across available GPUs/CPU
    # load_in_4bit=True,      # optional: 4-bit quantization (requires bitsandbytes)
    low_cpu_mem_usage=True,   # stream weights to reduce peak CPU RAM while loading
)

pipe = TextGenerationPipeline(model=model, tokenizer=tokenizer)
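
Once the pipeline is built, a minimal generation sketch might look like the following. This is not part of the original card: the prompt and sampling parameters are illustrative assumptions, and it assumes the tokenizer ships the standard Llama-3 Instruct chat template.

# Format a chat turn with the tokenizer's chat template, then generate.
# Prompt text and sampling settings below are illustrative assumptions.
messages = [{"role": "user", "content": "안녕하세요, 간단히 자기소개를 해주세요."}]  # "Hello, please introduce yourself briefly."
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])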