Model Card for mncai/agiin-11.1B-v0.1

Introduction to MindsAndCompany

https://mnc.ai/

We create various AI models and develop solutions that can be applied to businesses. In generative AI, we are building products such as a Code Assistant, a TOD (task-oriented dialogue) chatbot, and LLMOps tooling, and we are working toward Enterprise AGI (Artificial General Intelligence).

Model Summary

This model is built on the Mistral architecture and was inspired by neural connection technology and rehabilitation therapy. We created a new model architecture that does not require pretraining from scratch: training the model takes only about 7 hours on a single H100 GPU.

Data

Intel/orca_dpo_pairs (DPO)
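
The card does not document the preprocessing pipeline. As a minimal, hypothetical sketch, the preference pairs can be loaded with the datasets library and mapped into the prompt/chosen/rejected layout that DPO trainers commonly expect; the prompt formatting below is an assumption, not the recipe actually used for this model.

from datasets import load_dataset

# Intel/orca_dpo_pairs provides "system", "question", "chosen", "rejected" columns.
ds = load_dataset("Intel/orca_dpo_pairs", split="train")

def to_dpo_format(row):
    # Assumption: fold the system message into the prompt; the exact
    # template used for this model is not published.
    return {
        "prompt": row["system"] + "\n" + row["question"],
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    }

dpo_ds = ds.map(to_dpo_format, remove_columns=ds.column_names)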

Surgery and Training

We performed model surgery by stacking Mistral decoder layers to a depth of 50, then fine-tuned the result with DPO (Direct Preference Optimization). A hypothetical sketch of this procedure follows.
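
Since the exact surgery recipe is not published here, the sketch below only illustrates the general technique: depth up-scaling a 32-layer Mistral-7B to 50 decoder layers by duplicating a slice of layers, then running DPO with the trl library. The layer indices, beta, and all trainer settings are assumptions, and the DPOTrainer API differs somewhat between trl versions.

import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# --- Surgery: stack Mistral decoder layers to a depth of 50 ---
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16
)
layers = base.model.layers  # nn.ModuleList of 32 decoder layers

# Assumed recipe: keep layers 0-24, then append copies of layers 7-31,
# giving 25 + 25 = 50 layers (roughly 11B parameters).
stacked = list(layers[:25]) + [copy.deepcopy(layer) for layer in layers[7:32]]
base.model.layers = torch.nn.ModuleList(stacked)
base.config.num_hidden_layers = len(stacked)

# --- DPO fine-tuning on the preference pairs from the Data section sketch ---
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
args = TrainingArguments(
    output_dir="agiin-11.1B-dpo",  # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
)
trainer = DPOTrainer(
    model=base,
    ref_model=None,        # trl builds a frozen reference copy when None
    args=args,
    beta=0.1,              # assumed DPO temperature
    train_dataset=dpo_ds,  # see the Data section
    tokenizer=tokenizer,
)
trainer.train()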

How to Use

import transformers
from transformers import AutoTokenizer

hf_model = "mncai/agiin-11.1B-v0.1"

message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "Two spheres have diameters of 1 and 2. How many times larger is the volume of the bigger one? Please explain as well."}
]
tokenizer = AutoTokenizer.from_pretrained(hf_model)
# Render the chat messages with the model's chat template.
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer
)


sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=512,
)
print(sequences[0]['generated_text'])
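
With do_sample=True, the temperature of 0.7 and top_p of 0.9 trade determinism for diversity; lower the temperature (or set do_sample=False) for more repeatable answers. Note that max_length counts the prompt and the completion together, so 512 tokens is the total budget.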

Contact

If you have any questions, please raise an issue or contact us at [email protected]
