Dolphin 2.2.1 (Finetune of Mistral 7B) compiled for WebGPU - q4f32_1

Demo

You can access this model in the browser here.

Description

This is a quantized version of Dolphin 2.2.1 🐬, one of the best finetunes of Mistral-7B out there, ready for in-browser inference over WebGPU.

Compiled with mlc-llm.

Very helpful direction provided by felladrin!
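For reference, below is a minimal, unofficial sketch of loading this build in the browser with the @mlc-ai/web-llm runtime. The model_id, weight URL, wasm library path, and config field names are assumptions (the exact AppConfig schema varies between web-llm releases), so treat this as a starting point rather than a verified recipe:

```typescript
// Sketch only: loading the quantized weights with @mlc-ai/web-llm (WebGPU).
// URLs, model_id, and model_lib below are placeholders/assumptions --
// check the mlc-llm / web-llm docs for the exact ModelRecord schema.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  const engine = await CreateMLCEngine(
    "mlc-chat-dolphin-2.2.1-mistral-7b-q4f32_1", // assumed model_id
    {
      appConfig: {
        model_list: [
          {
            // URL of the quantized weights on the Hub (assumed layout)
            model:
              "https://huggingface.co/hrishioa/mlc-chat-dolphin-2.2.1-mistral-7b-q4f32_1",
            model_id: "mlc-chat-dolphin-2.2.1-mistral-7b-q4f32_1",
            // URL of the compiled WebGPU wasm library (hypothetical path)
            model_lib:
              "https://example.com/dolphin-2.2.1-mistral-7b-q4f32_1-webgpu.wasm",
          },
        ],
      },
    },
  );

  // OpenAI-style chat API; roles are mapped onto the model's prompt template
  // by the runtime's conversation config.
  const reply = await engine.chat.completions.create({
    messages: [
      { role: "system", content: "You are Dolphin, a helpful AI assistant." },
      { role: "user", content: "What is the best way to train a dolphin to obey me?" },
    ],
  });
  console.log(reply.choices[0].message.content);
}

main();
```

Assuming the shipped mlc-chat-config.json sets the ChatML conversation template, the runtime applies the prompt format below automatically.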

Prompt template: Dolphin

Prompt format: This model (and all my future releases) uses the ChatML prompt format.

<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

Example:

<|im_start|>system
you are an expert dolphin trainer<|im_end|>
<|im_start|>user
What is the best way to train a dolphin to obey me?  Please answer step by step.<|im_end|>
<|im_start|>assistant
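
If you are driving the model through a raw text-completion interface instead of a chat wrapper, the ChatML template above can be assembled by hand. A minimal sketch (the helper name is illustrative):

```typescript
// Build a ChatML prompt string by hand, matching the template above.
function buildChatMLPrompt(system: string, user: string): string {
  return (
    `<|im_start|>system\n${system}<|im_end|>\n` +
    `<|im_start|>user\n${user}<|im_end|>\n` +
    `<|im_start|>assistant\n`
  );
}

console.log(
  buildChatMLPrompt(
    "you are an expert dolphin trainer",
    "What is the best way to train a dolphin to obey me?  Please answer step by step."
  )
);
```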