
Yi-Ko-6B-Instruct-v1.1

Model Details

Base Model

beomi/Yi-Ko-6B

Training Datasets

  1. kyujinpy/KOR-OpenOrca-Platypus-v3 🙇
  2. beomi/KoAlpaca-v1.1a 🙇
  3. maywell/ko_wikidata_QA 🙇
  4. Data from AIHub

Instruction Format

### User:
{instruction}

### Assistant:
{response}
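
A minimal sketch of assembling a prompt in this template (the build_prompt helper is illustrative, not part of the model's API):

def build_prompt(instruction: str) -> str:
    # Wrap the user instruction in the template above; the model's reply
    # is generated as a continuation of the "### Assistant:" line.
    return f"### User:\n{instruction}\n\n### Assistant:\n"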

Loading the Model

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("wkshin89/Yi-Ko-6B-Instruct-v1.1")

# Load the weights in bfloat16 and let device_map="auto" place them
# across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    "wkshin89/Yi-Ko-6B-Instruct-v1.1",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
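
With the model loaded, a prompt in the instruction format above can be passed to generate. The example question and decoding settings below are illustrative assumptions, not recommended values:

prompt = "### User:\nWhat is the capital of South Korea?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))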